Updates from: 03/04/2022 02:09:47
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/authorization-code-flow.md
Previously updated : 06/18/2021 Last updated : 03/03/2022
The authorization code flow for single page applications requires some additiona
The `spa` redirect type is backwards compatible with the implicit flow. Apps currently using the implicit flow to get tokens can move to the `spa` redirect URI type without issues and continue using the implicit flow.

## 1. Get an authorization code
-The authorization code flow begins with the client directing the user to the `/authorize` endpoint. This is the interactive part of the flow, where the user takes action. In this request, the client indicates in the `scope` parameter the permissions that it needs to acquire from the user. The following three examples (with line breaks for readability) each use a different user flow. If you're testing this GET HTTP request, use your browser.
+The authorization code flow begins with the client directing the user to the `/authorize` endpoint. This is the interactive part of the flow, where the user takes action. In this request, the client indicates in the `scope` parameter the permissions that it needs to acquire from the user. The following example (with line breaks for readability) shows how to acquire an authorization code. If you're testing this GET HTTP request, use your browser.
```http
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
| response_mode |Recommended |The method that you use to send the resulting authorization code back to your app. It can be `query`, `form_post`, or `fragment`. |
| state |Recommended |A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used, to prevent cross-site request forgery attacks. The state also is used to encode information about the user's state in the app before the authentication request occurred. For example, the page the user was on, or the user flow that was being executed. |
| prompt |Optional |The type of user interaction that is required. Currently, the only valid value is `login`, which forces the user to enter their credentials on that request. Single sign-on will not take effect. |
-| code_challenge | recommended / required | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is now recommended for all application types - native apps, SPAs, and confidential clients like web apps. |
-| `code_challenge_method` | recommended / required | The method used to encode the `code_verifier` for the `code_challenge` parameter. This *SHOULD* be `S256`, but the spec allows the use of `plain` if for some reason the client cannot support SHA256. <br/><br/>If excluded, `code_challenge` is assumed to be plaintext if `code_challenge` is included. Microsoft identity platform supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is required for [single page apps using the authorization code flow](tutorial-register-spa.md).|
+| code_challenge | recommended / required | Used to secure authorization code grants via Proof Key for Code Exchange (PKCE). Required if `code_challenge_method` is included. You need to add logic in your application to generate the `code_verifier` and `code_challenge`. The `code_challenge` is a Base64 URL-encoded SHA256 hash of the `code_verifier`. You store the `code_verifier` in your application for later use, and send the `code_challenge` along with the authorization request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is now recommended for all application types - native apps, SPAs, and confidential clients like web apps. |
+| `code_challenge_method` | recommended / required | The method used to encode the `code_verifier` for the `code_challenge` parameter. This *SHOULD* be `S256`, but the spec allows the use of `plain` if for some reason the client cannot support SHA256. <br/><br/>If you exclude the `code_challenge_method`, but still include the `code_challenge`, then the `code_challenge` is assumed to be plaintext. Microsoft identity platform supports both `plain` and `S256`. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). This is required for [single page apps using the authorization code flow](tutorial-register-spa.md).|
| login_hint | No| Can be used to pre-fill the sign-in name field of the sign-in page. For more information, see [Prepopulate the sign-in name](direct-signin.md#prepopulate-the-sign-in-name). |
| domain_hint | No| Provides a hint to Azure AD B2C about the social identity provider that should be used for sign-in. If a valid value is included, the user goes directly to the identity provider sign-in page. For more information, see [Redirect sign-in to a social provider](direct-signin.md#redirect-sign-in-to-a-social-provider). |
| Custom parameters | No| Custom parameters that can be used with [custom policies](custom-policy-overview.md). For example, [dynamic custom page content URI](customize-ui-with-html.md?pivots=b2c-custom-policy#configure-dynamic-custom-page-content-uri), or [key-value claim resolvers](claim-resolver-overview.md#oauth2-key-value-parameters). |
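As a minimal Python sketch, the `code_verifier` and `code_challenge` described in the parameter table above can be generated and attached to the authorization request like this. The tenant, policy, and redirect URI values are placeholders for illustration only:

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

# PKCE: a high-entropy code_verifier, then its S256 code_challenge
# (the Base64 URL-encoded SHA256 hash of the verifier, padding stripped).
code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Assemble the GET /authorize request. Store code_verifier for the token request.
params = {
    "client_id": "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6",
    "response_type": "code",
    "redirect_uri": "https://jwt.ms",       # placeholder redirect URI
    "response_mode": "query",
    "scope": "openid offline_access",
    "state": secrets.token_urlsafe(16),      # random anti-CSRF value
    "code_challenge": code_challenge,
    "code_challenge_method": "S256",
}
authorize_url = (
    "https://contoso.b2clogin.com/contoso.onmicrosoft.com/"  # placeholder tenant
    "b2c_1_signupsignin1/oauth2/v2.0/authorize?" + urlencode(params)
)
print(authorize_url)
```

Opening `authorize_url` in a browser starts the interactive sign-in; keep `code_verifier` to redeem the returned code.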
grant_type=authorization_code&client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6&sco
| client_secret | Yes, in Web Apps | The application secret that was generated in the [Azure portal](https://portal.azure.com/). Client secrets are used in this flow for Web App scenarios, where the client can securely store a client secret. For Native App (public client) scenarios, client secrets cannot be securely stored, and therefore are not used in this call. If you use a client secret, please change it on a periodic basis. |
| grant_type |Required |The type of grant. For the authorization code flow, the grant type must be `authorization_code`. |
| scope |Required |A space-separated list of scopes. A single scope value indicates to Azure AD both of the permissions that are being requested. Using the client ID as the scope indicates that your app needs an access token that can be used against your own service or web API, represented by the same client ID. The `offline_access` scope indicates that your app needs a refresh token for long-lived access to resources. You also can use the `openid` scope to request an ID token from Azure AD B2C. |
-| code |Required |The authorization code that you acquired in the first leg of the flow. |
+| code |Required |The authorization code that you acquired from the `/authorize` endpoint. |
| redirect_uri |Required |The redirect URI of the application where you received the authorization code. |
-| code_verifier | recommended | The same code_verifier that was used to obtain the authorization_code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
+| code_verifier | recommended | The same `code_verifier` used to obtain the authorization code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
If you're testing this POST HTTP request, you can use any HTTP client such as [Microsoft PowerShell](/powershell/scripting/overview) or [Postman](https://www.postman.com/).
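For example, the form-encoded body of that POST can be assembled with any standard URL-encoding helper before sending it with your HTTP client of choice. A minimal Python sketch; every value here is a placeholder:

```python
from urllib.parse import urlencode

# Form-encoded body for the POST to the /token endpoint.
# All values are placeholders for illustration.
token_request = {
    "grant_type": "authorization_code",
    "client_id": "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6",
    "scope": "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 offline_access",
    "code": "<authorization-code-from-step-1>",
    "redirect_uri": "https://jwt.ms",
    "code_verifier": "<code-verifier-stored-when-building-the-authorize-request>",
}
body = urlencode(token_request)
# POST this body (Content-Type: application/x-www-form-urlencoded) to your
# tenant's /oauth2/v2.0/token endpoint for the policy that issued the code.
print(body)
```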
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md
Previously updated : 12/09/2021 Last updated : 03/03/2022
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
Previously updated : 12/09/2021 Last updated : 03/03/2022
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
You should now see BindID as a new OIDC Identity provider listed within your B2C
11. Select **Run user flow**.
-12. The browser will be redirected to the BindID login page. Enter the account name registered during User registration. The user will receive a push notification to the registered user mobile device for a Fast Identity Online (FIDO2) certified authentication. It can be a user finger print, biometric or decentralized pin.
+12. The browser will be redirected to the BindID login page. The user enters the account email registered during user registration and authenticates using appless FIDO2 biometrics, such as a fingerprint.
13. Once the authentication challenge is accepted, the browser will redirect the user to the reply URL.
active-directory-b2c Tokens Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tokens-overview.md
Previously updated : 02/11/2022 Last updated : 03/03/2022
Azure AD B2C supports the [OAuth 2.0 and OpenID Connect protocols](protocols-ove
The following tokens are used in communication with Azure AD B2C:
-- **ID token** - A JWT that contains claims that you can use to identify users in your application. This token is securely sent in HTTP requests for communication between two components of the same application or service. You can use the claims in an ID token as you see fit. They are commonly used to display account information or to make access control decisions in an application. ID tokens are signed, but they are not encrypted. When your application or API receives an ID token, it must validate the signature to prove that the token is authentic. Your application or API must also validate a few claims in the token to prove that it's valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.
-- **Access token** - A JWT that contains claims that you can use to identify the granted permissions to your APIs. Access tokens are signed, but they aren't encrypted. Access tokens are used to provide access to APIs and resource servers. When your API receives an access token, it must validate the signature to prove that the token is authentic. Your API must also validate a few claims in the token to prove that it is valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.
-- **Refresh token** - Refresh tokens are used to acquire new ID tokens and access tokens in an OAuth 2.0 flow. They provide your application with long-term access to resources on behalf of users without requiring interaction with those users. Refresh tokens are opaque to your application. They are issued by Azure AD B2C and can be inspected and interpreted only by Azure AD B2C. They are long-lived, but your application shouldn't be written with the expectation that a refresh token will last for a specific period of time. Refresh tokens can be invalidated at any moment for a variety of reasons. The only way for your application to know if a refresh token is valid is to attempt to redeem it by making a token request to Azure AD B2C. When you redeem a refresh token for a new token, you receive a new refresh token in the token response. Save the new refresh token. It replaces the refresh token that you previously used in the request. This action helps guarantee that your refresh tokens remain valid for as long as possible. Note that single-page applications using the authorization code flow with PKCE always have a refresh token lifetime of 24 hours. [Learn more about the security implications of refresh tokens in the browser](../active-directory/develop/reference-third-party-cookies-spas.md#security-implications-of-refresh-tokens-in-the-browser).
+- **ID token** - A JWT that contains claims that you can use to identify users in your application. This token is securely sent in HTTP requests for communication between two components of the same application or service. You can use the claims in an ID token as you see fit. They're commonly used to display account information or to make access control decisions in an application. ID tokens are signed, but they're not encrypted. When your application or API receives an ID token, it must validate the signature to prove that the token is authentic. Your application or API must also validate a few claims in the token to prove that it's valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.
+
+- **Access token** - A JWT that contains claims that you can use to identify the granted permissions to your APIs. Access tokens are signed, but they aren't encrypted. Access tokens are used to provide access to APIs and resource servers. When your API receives an access token, it must validate the signature to prove that the token is authentic. Your API must also validate a few claims in the token to prove that it's valid. Depending on the scenario requirements, the claims validated by an application can vary, but your application must perform some common claim validations in every scenario.
+
+- **Refresh token** - Refresh tokens are used to acquire new ID tokens and access tokens in an OAuth 2.0 flow. They provide your application with long-term access to resources on behalf of users without requiring interaction with those users. Refresh tokens are opaque to your application. They're issued by Azure AD B2C and can be inspected and interpreted only by Azure AD B2C. They're long-lived, but your application shouldn't be written with the expectation that a refresh token will last for a specific period of time. Refresh tokens can be invalidated at any moment for a variety of reasons. The only way for your application to know if a refresh token is valid is to attempt to redeem it by making a token request to Azure AD B2C. When you redeem a refresh token for a new token, you receive a new refresh token in the token response. Save the new refresh token. It replaces the refresh token that you previously used in the request. This action helps guarantee that your refresh tokens remain valid for as long as possible. Single-page applications using the authorization code flow with PKCE always have a refresh token lifetime of 24 hours. [Learn more about the security implications of refresh tokens in the browser](../active-directory/develop/reference-third-party-cookies-spas.md#security-implications-of-refresh-tokens-in-the-browser).
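The rotation guidance above (always save the refresh token from the latest token response) can be sketched as follows. The in-memory `token_store` dict here is a hypothetical stand-in for your application's secure storage:

```python
# Hypothetical in-memory token store; a real app persists this securely.
token_store = {"refresh_token": "old-refresh-token"}

def on_token_response(response: dict) -> None:
    """Keep only the most recent refresh token from a token response."""
    new_refresh = response.get("refresh_token")
    if new_refresh:
        # The previously used refresh token may be invalidated at any time,
        # so replace it with the one just issued.
        token_store["refresh_token"] = new_refresh

on_token_response({"access_token": "<jwt>", "refresh_token": "new-refresh-token"})
print(token_store["refresh_token"])
```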
## Endpoints
The metadata document for the `B2C_1_signupsignin1` policy in the `contoso.onmic
https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_signupsignin1/v2.0/.well-known/openid-configuration
```
-To determine which policy was used to sign a token (and where to go to request the metadata), you have two options. First, the policy name is included in the `tfp` (default) or `acr` claim (as configured) in the token. You can parse claims out of the body of the JWT by base-64 decoding the body and deserializing the JSON string that results. The `tfp` or `acr` claim is the name of the policy that was used to issue the token. The other option is to encode the policy in the value of the `state` parameter when you issue the request, and then decode it to determine which policy was used. Either method is valid.
+To determine which policy was used to sign a token (and where to go to request the metadata), you have two options. First, the policy name is included in the `tfp` (default) or `acr` claim (as configured) in the token. You can parse claims out of the body of the JWT by base-64 decoding the body and deserializing the JSON string that results. The `tfp` or `acr` claim is the name of the policy that was used to issue the token. The other option is to encode the policy in the value of the `state` parameter when you issue the request, and then decode it to determine which policy was used. Either method is valid.
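The first option, base-64 decoding the JWT body to read the `tfp` or `acr` claim, can be sketched in a few lines of Python. The sample token below is constructed purely to demonstrate the decoding; a real application must still validate the token's signature:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) claims from the body of a compact JWT."""
    body = token.split(".")[1]
    body += "=" * (-len(body) % 4)  # restore the Base64 padding that JWTs strip
    return json.loads(base64.urlsafe_b64decode(body))

# Build a sample unsigned token just to demonstrate the decoding step.
claims = {"tfp": "b2c_1_signupsignin1"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
sample = "eyJhbGciOiJub25lIn0." + payload + "."
print(jwt_claims(sample)["tfp"])  # → b2c_1_signupsignin1
```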
Azure AD B2C uses the RS256 algorithm, which is based on the [RFC 3447](https://www.rfc-editor.org/rfc/rfc3447#section-3.1) specification. The public key consists of two components: the RSA modulus (`n`) and the RSA public exponent (`e`). You can programmatically convert `n` and `e` values to a certificate format for token validation.
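As a minimal sketch of that conversion, the `n` and `e` values published in the metadata are Base64 URL-encoded big-endian integers, so the first step is decoding them; the resulting integers can then be handed to an RSA library (for example, `RSAPublicNumbers` in the `cryptography` package) to build the public key. The helper name below is illustrative:

```python
import base64

def b64url_to_int(value: str) -> int:
    """Convert a Base64 URL-encoded JWK field (such as `n` or `e`) to an integer."""
    value += "=" * (-len(value) % 4)  # restore the stripped padding
    return int.from_bytes(base64.urlsafe_b64decode(value), "big")

# "AQAB" is how the common RSA public exponent 65537 appears in JWKS metadata.
print(b64url_to_int("AQAB"))  # → 65537
```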
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)."
Previously updated : 11/02/2021 Last updated : 03/03/2022
Welcome to what's new in Azure Active Directory B2C documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the B2C service, see [What's new in Azure Active Directory](../active-directory/fundamentals/whats-new.md).
+## February 2022
+
+### New articles
+
+- [Configure authentication in a sample Node.js web application by using Azure Active Directory B2C](configure-a-sample-node-web-app.md)
+- [Configure authentication in a sample Node.js web API by using Azure Active Directory B2C](configure-authentication-in-sample-node-web-app-with-api.md)
+- [Enable authentication options in a Node.js web app by using Azure Active Directory B2C](enable-authentication-in-node-web-app-options.md)
+- [Enable Node.js web API authentication options using Azure Active Directory B2C](enable-authentication-in-node-web-app-with-api-options.md)
+- [Enable authentication in your own Node.js web API by using Azure Active Directory B2C](enable-authentication-in-node-web-app-with-api.md)
+- [Enable authentication in your own Node web application using Azure Active Directory B2C](enable-authentication-in-node-web-app.md)
+
+### Updated articles
+
+- [Configure session behavior in Azure Active Directory B2C](session-behavior.md)
+- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md)
+- [Define a self-asserted technical profile in an Azure Active Directory B2C custom policy](self-asserted-technical-profile.md)
+- [About claim resolvers in Azure Active Directory B2C custom policies](claim-resolver-overview.md)
+- [Date claims transformations](date-transformations.md)
+- [Integer claims transformations](integer-transformations.md)
+- [JSON claims transformations](json-transformations.md)
+- [Define phone number claims transformations in Azure AD B2C](phone-number-claims-transformations.md)
+- [Social accounts claims transformations](social-transformations.md)
+- [String claims transformations](string-transformations.md)
+- [Web sign in with OpenID Connect in Azure Active Directory B2C](openid-connect.md)
+
## January 2022

### Updated articles
Welcome to what's new in Azure Active Directory B2C documentation. This article
- [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md)
- [Define an Azure AD MFA technical profile in an Azure AD B2C custom policy](multi-factor-auth-technical-profile.md)
- [Enable multifactor authentication in Azure Active Directory B2C](multi-factor-authentication.md)
-- [String claims transformations](string-transformations.md)
-
-## November 2021
-
-### Updated articles
-
-- [Define an OAuth2 technical profile in an Azure Active Directory B2C custom policy](oauth2-technical-profile.md)
-- [Error codes: Azure Active Directory B2C](error-codes.md)
-- [Configure authentication options in an Android app by using Azure AD B2C](enable-authentication-android-app-options.md)
-- [Set up a force password reset flow in Azure Active Directory B2C](force-password-reset.md)
-
-## October 2021
-
-### New articles
-
-- [Tutorial: Configure IDEMIA with Azure Active Directory B2C for relying party to consume IDEMIA or US State issued mobile identity credentials (Preview)](partner-idemia.md)
-- [Tutorial: Extend Azure Active Directory B2C to protect on-premises applications using F5 BIG-IP](partner-f5.md)
-- [Roles and resource access control](roles-resource-access-control.md)
-- [Supported Azure AD features](supported-azure-ad-features.md)
-
-### Updated articles
-
-- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md)
-- [Customize the user interface in Azure Active Directory B2C](customize-ui.md)
-- [Tutorial: Extend Azure Active Directory B2C to protect on-premises applications using F5 BIG-IP](partner-f5.md)
-- [Set up sign-up and sign-in with generic OpenID Connect using Azure Active Directory B2C](identity-provider-generic-openid-connect.md)
-- [RelyingParty](relyingparty.md)
-- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md)
-- [Collect Azure Active Directory B2C logs with Application Insights](troubleshoot-with-application-insights.md)
-- [Troubleshoot Azure AD B2C custom policies and user flows](troubleshoot.md)
-- [Define custom attributes in Azure Active Directory B2C](user-flow-custom-attributes.md)
-- [Tutorial: Configure security analytics for Azure Active Directory B2C data with Azure Sentinel](azure-sentinel.md)
-- [What is Azure Active Directory B2C?](overview.md)
-- [Quickstart: Set up sign in for a single-page app using Azure Active Directory B2C](quickstart-single-page-app.md)
-- [Quickstart: Set up sign in for an ASP.NET application using Azure Active Directory B2C](quickstart-web-app-dotnet.md)
-- [Solutions and Training for Azure Active Directory B2C](solution-articles.md)
-- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)
-- [Register a SAML application in Azure AD B2C](saml-service-provider.md)
-- [Configure session behavior in Azure Active Directory B2C](session-behavior.md)
-
-## September 2021
-
-### Updated articles
-
-- [Page layout versions](page-layout.md)
-- [Tutorial: Create an Azure Active Directory B2C tenant](tutorial-create-tenant.md)
-- [Add an API connector to a sign-up user flow](add-api-connector.md)
-- [Secure your API used an API connector in Azure AD B2C](secure-rest-api.md)
-- [Configure session behavior in Azure Active Directory B2C](session-behavior.md)
-- [Manage your Azure Active Directory B2C tenant](tenant-management.md)
-- [Clean up resources and delete the tenant](tutorial-delete-tenant.md)
-- [Define custom attributes in Azure Active Directory B2C](user-flow-custom-attributes.md)
-- [Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication](partner-bloksec.md)
-- [Configure itsme OpenID Connect (OIDC) with Azure Active Directory B2C](partner-itsme.md)
-- [Tutorial: Configure Keyless with Azure Active Directory B2C](partner-keyless.md)
-- [Tutorial: Configure Nok Nok with Azure Active Directory B2C to enable passwordless FIDO2 authentication](partner-nok-nok.md)
-- [Tutorial for configuring Saviynt with Azure Active Directory B2C](partner-saviynt.md)
-- [Integrating Trusona with Azure Active Directory B2C](partner-trusona.md)
-- [Integrating Twilio Verify App with Azure Active Directory B2C](partner-twilio.md)
-- [Configure complexity requirements for passwords in Azure Active Directory B2C](password-complexity.md)
-- [Set up phone sign-up and sign-in for user flows](phone-authentication-user-flows.md)
-- [Set up sign-up and sign-in with a Google account using Azure Active Directory B2C](identity-provider-google.md)
-- [Set up sign-up and sign-in with a ID.me account using Azure Active Directory B2C](identity-provider-id-me.md)
-- [Set up sign-up and sign-in with a LinkedIn account using Azure Active Directory B2C](identity-provider-linkedin.md)
-- [Set up sign-up and sign-in with a QQ account using Azure Active Directory B2C](identity-provider-qq.md)
-- [Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C](identity-provider-salesforce-saml.md)
-- [Set up sign-up and sign-in with a Salesforce account using Azure Active Directory B2C](identity-provider-salesforce.md)
-- [Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C](identity-provider-twitter.md)
-- [Set up sign-up and sign-in with a WeChat account using Azure Active Directory B2C](identity-provider-wechat.md)
-- [Set up sign-up and sign-in with a Weibo account using Azure Active Directory B2C](identity-provider-weibo.md)
-- [Pass an identity provider access token to your application in Azure Active Directory B2C](idp-pass-through-user-flow.md)
-- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs-saml.md)
-- [Set up sign-up and sign-in with an Amazon account using Azure Active Directory B2C](identity-provider-amazon.md)
-- [Set up sign-up and sign-in with a Facebook account using Azure Active Directory B2C](identity-provider-facebook.md)
-- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md)
-- [Billing model for Azure Active Directory B2C](billing.md)
-- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
-- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)
-
-## August 2021
-
-### New articles
-
-- [Deploy custom policies with GitHub Actions](deploy-custom-policies-github-action.md)
-- [Configure authentication in a sample WPF desktop app by using Azure AD B2C](configure-authentication-sample-wpf-desktop-app.md)
-- [Enable authentication options in a WPF desktop app by using Azure AD B2C](enable-authentication-wpf-desktop-app-options.md)
-- [Add AD FS as a SAML identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs-saml.md)
-- [Configure authentication in a sample Python web application using Azure Active Directory B2C](configure-authentication-sample-python-web-app.md)
-- [Configure authentication options in a Python web application using Azure Active Directory B2C](enable-authentication-python-web-app-options.md)
-- [Tutorial: How to perform security analytics for Azure AD B2C data with Azure Sentinel](azure-sentinel.md)
-- [Enrich tokens with claims from external sources using API connectors](add-api-connector-token-enrichment.md)
-
-### Updated articles
-
-- [Customize the user interface with HTML templates in Azure Active Directory B2C](customize-ui-with-html.md)
-- [Configure authentication in a sample WPF desktop app by using Azure AD B2C](configure-authentication-sample-wpf-desktop-app.md)
-- [Enable authentication options in a WPF desktop app by using Azure AD B2C](enable-authentication-wpf-desktop-app-options.md)
-- [Configure authentication in a sample iOS Swift app by using Azure AD B2C](configure-authentication-sample-ios-app.md)
-- [Enable authentication options in an iOS Swift app by using Azure AD B2C](enable-authentication-ios-app-options.md)
-- [Enable authentication in your own iOS Swift app by using Azure AD B2C](enable-authentication-ios-app.md)
-- [Add a web API application to your Azure Active Directory B2C tenant](add-web-api-application.md)
-- [Configure authentication in a sample Android app by using Azure AD B2C](configure-authentication-sample-android-app.md)
-- [Configure authentication options in an Android app by using Azure AD B2C](enable-authentication-android-app-options.md)
-- [Enable authentication in your own Android app by using Azure AD B2C](enable-authentication-android-app.md)
-- [Configure authentication in a sample web app by using Azure AD B2C](configure-authentication-sample-web-app.md)
-- [Enable authentication options in a web app by using Azure AD B2C](enable-authentication-web-application-options.md)
-- [Enable authentication in your own web app by using Azure AD B2C](enable-authentication-web-application.md)
-- [Configure authentication options in a single-page application by using Azure AD B2C](enable-authentication-spa-app-options.md)
-- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)
-- [Add AD FS as an OpenID Connect identity provider using custom policies in Azure Active Directory B2C](identity-provider-adfs.md)
-- [Configure SAML identity provider options with Azure Active Directory B2C](identity-provider-generic-saml-options.md)
-- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md)
-- [Tutorial: Configure Azure Active Directory B2C with BlokSec for passwordless authentication](partner-bloksec.md)
-- [Add an API connector to a sign-up user flow](add-api-connector.md)
-- [Use API connectors to customize and extend sign-up user flows](api-connectors-overview.md)
-- [Set up phone sign-up and sign-in for user flows](phone-authentication-user-flows.md)
-
-## July 2021
-
-### New articles
-
-- [Configure authentication in a sample Angular Single Page application using Azure Active Directory B2C](configure-authentication-sample-angular-spa-app.md)
-- [Configure authentication in a sample iOS Swift application using Azure Active Directory B2C](configure-authentication-sample-ios-app.md)
-- [Configure authentication options in an Angular application using Azure Active Directory B2C](enable-authentication-angular-spa-app-options.md)
-- [Enable authentication in your own Angular Application using Azure Active Directory B2C](enable-authentication-angular-spa-app.md)
-- [Configure authentication options in an iOS Swift application using Azure Active Directory B2C](enable-authentication-ios-app-options.md)
-- [Enable authentication in your own iOS Swift application using Azure Active Directory B2C](enable-authentication-ios-app.md)
-
-### Updated articles
-
-- [Customize the user interface in Azure Active Directory B2C](customize-ui.md)
-- [Integer claims transformations](integer-transformations.md)
-- [Enable JavaScript and page layout versions in Azure Active Directory B2C](javascript-and-page-layout.md)
-- [Monitor Azure AD B2C with Azure Monitor](azure-monitor.md)
-- [Page layout versions](page-layout.md)
-- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md)
-
-## June 2021
-
-### New articles
-
-- [Enable authentication in your own web API using Azure Active Directory B2C](enable-authentication-web-api.md)
-- [Enable authentication in your own Single Page Application using Azure Active Directory B2C](enable-authentication-spa-app.md)
-- [Publish your Azure AD B2C app to the Azure AD app gallery](publish-app-to-azure-ad-app-gallery.md)
-- [Configure authentication in a sample Single Page application using Azure Active Directory B2C](configure-authentication-sample-spa-app.md)
-- [Configure authentication in a sample web application that calls a web API using Azure Active Directory B2C](configure-authentication-sample-web-app-with-api.md)
-- [Configure authentication in a sample Single Page application using Azure Active Directory B2C options](enable-authentication-spa-app-options.md)
-- [Configure authentication in a sample web application that calls a web API using Azure Active Directory B2C options](enable-authentication-web-app-with-api-options.md)
-- [Enable authentication in your own web application that calls a web API using Azure Active Directory B2C](enable-authentication-web-app-with-api.md)
-- [Sign-in options in Azure AD B2C](sign-in-options.md)
-
-### Updated articles
-
-- [User profile attributes](user-profile-attributes.md)
-- [Configure authentication in a sample web application using Azure Active Directory B2C](configure-authentication-sample-web-app.md)
-- [Configure authentication in a sample web application using Azure Active Directory B2C options](enable-authentication-web-application-options.md)
-- [Set up a sign-in flow in Azure Active Directory B2C](add-sign-in-policy.md)
-- [Set up a sign-up and sign-in flow in Azure Active Directory B2C](add-sign-up-and-sign-in-policy.md)
-- [Set up the local account identity provider](identity-provider-local.md)
-- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)
-- [Add user attributes and customize user input in Azure Active Directory B2C](configure-user-input.md)
-- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
-
-## May 2021
-
-### New articles
-
-- [Define an OAuth2 custom error technical profile in an Azure Active Directory B2C custom policy](oauth2-error-technical-profile.md)
-- [Configure authentication in a sample web application using Azure Active Directory B2C](configure-authentication-sample-web-app.md)
-- [Configure authentication in a sample web application using Azure Active Directory B2C options](enable-authentication-web-application-options.md)
-- [Enable authentication in your own web application using Azure Active Directory B2C](enable-authentication-web-application.md)
-- [Azure Active Directory B2C TLS and cipher suite requirements](https-cipher-tls-requirements.md)
-
-### Updated articles
-
-- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
-- [Mitigate credential attacks in Azure AD B2C](threat-management.md)
-- [Azure Active Directory B2C service limits and restrictions](service-limits.md)
-
-
-## April 2021
-
-### New articles
-
-- [Set up sign-up and sign-in with a eBay account using Azure Active Directory B2C](identity-provider-ebay.md)
-- [Clean up resources and delete the tenant](tutorial-delete-tenant.md)
-- [Define a Conditional Access technical profile in an Azure Active Directory B2C custom policy](conditional-access-technical-profile.md)
-- [Manage your Azure Active Directory B2C tenant](tenant-management.md)
-
-### Updated articles
-
-- [Developer notes for Azure Active Directory B2C](custom-policy-developer-notes.md)
-- [Add an API connector to a sign-up user flow](add-api-connector.md)
-- [Walkthrough: Add REST API claims exchanges to custom policies in Azure Active Directory B2C](add-api-connector-token-enrichment.md)
-- [Secure your API Connector](secure-rest-api.md)
-- [Use API connectors to customize and extend sign-up user flows](api-connectors-overview.md)
-- [Technical and feature overview of Azure Active Directory B2C](technical-overview.md)
-- [Overview of policy keys in Azure Active Directory B2C](policy-keys-overview.md)
-- [Custom email verification with Mailjet](custom-email-mailjet.md)
-- [Custom email verification with SendGrid](custom-email-sendgrid.md)
-- [Tutorial: Create user flows in Azure Active Directory B2C](tutorial-create-user-flows.md)
-- [Azure AD B2C custom policy overview](custom-policy-overview.md)
-- [User flows and custom policies overview](user-flow-overview.md)
-- [Set up phone sign-up and sign-in for user flows](phone-authentication-user-flows.md)
-- [Enable multifactor authentication in Azure Active Directory B2C](multi-factor-authentication.md)
-- [User flow versions in Azure Active Directory B2C](user-flow-versions.md)
-
-
-## March 2021
-
-### New articles
-
-- [Enable custom domains for Azure Active Directory B2C](custom-domain.md)
-- [Investigate risk with Identity Protection in Azure AD B2C](identity-protection-investigate-risk.md)
-- [Set up sign-up and sign-in with an Apple ID using Azure Active Directory B2C (Preview)](identity-provider-apple-id.md)
-- [Set up a force password reset flow in Azure Active Directory B2C](force-password-reset.md)
-- [Embedded sign-in experience](embedded-login.md)
-
-### Updated articles
-
-- [Set up sign-up and sign-in with an Amazon account using Azure Active Directory B2C](identity-provider-amazon.md)
-- [Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C](identity-provider-salesforce-saml.md)
-- [Migrate an OWIN-based web API to b2clogin.com or a custom domain](multiple-token-endpoints.md)
-- [Technical profiles](technicalprofiles.md)
-- [Add Conditional Access to user flows in Azure Active Directory B2C](conditional-access-user-flow.md)
-- [Set up a password reset flow in Azure Active Directory B2C](add-password-reset-policy.md)
-- [RelyingParty](relyingparty.md)
-
-
-## February 2021
-
-### New articles
-
-- [Securing phone-based multifactor authentication (MFA)](phone-based-mfa.md)
-
-### Updated articles
-
-- [Azure Active Directory B2C code samples](integrate-with-app-code-samples.md)
-- [Track user behavior in Azure AD B2C by using Application Insights](analytics-with-application-insights.md)
-- [Configure session behavior in Azure Active Directory B2C](session-behavior.md)
+- [String claims transformations](string-transformations.md)
active-directory Concept Sspr Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-howitworks.md
Last updated 06/14/2021
-+
active-directory Concept Sspr Licensing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-licensing.md
Last updated 07/13/2021
-+
active-directory Concept Sspr Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-policy.md
Last updated 06/25/2021
-+
active-directory Howto Registration Mfa Sspr Combined Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-registration-mfa-sspr-combined-troubleshoot.md
Last updated 01/19/2021
-+
active-directory Howto Sspr Authenticationdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md
Last updated 10/05/2020
-+
active-directory Howto Sspr Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-customization.md
Last updated 07/17/2020
-+
active-directory Howto Sspr Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-deployment.md
Last updated 02/02/2022 -+ -+
active-directory Howto Sspr Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-reporting.md
Last updated 10/25/2021
-+
active-directory Howto Sspr Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-windows.md
Last updated 07/17/2020
-+
active-directory Troubleshoot Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr-writeback.md
Last updated 02/22/2022
-+
active-directory Troubleshoot Sspr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr.md
Last updated 06/28/2021
-+
active-directory Tutorial Enable Sspr Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr-writeback.md
Last updated 11/11/2021
-+
active-directory Tutorial Enable Sspr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/tutorial-enable-sspr.md
Last updated 1/05/2022 -+ # Customer intent: As an Azure AD Administrator, I want to learn how to enable and use self-service password reset so that my end-users can unlock their accounts or reset their passwords through a web browser.
active-directory Active Directory Authentication Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/active-directory-authentication-libraries.md
The Azure Active Directory Authentication Library (ADAL) v1.0 enables applicatio
> [!NOTE] > Looking for the Azure AD v2.0 libraries (MSAL)? Checkout the [MSAL library guide](../develop/reference-v2-libraries.md).
->
->
++
+> [!WARNING]
+> Support for Active Directory Authentication Library (ADAL) will end in December 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](../develop/msal-migration.md).
## Microsoft-supported Client Libraries
active-directory Sample V1 Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/azuread-dev/sample-v1-code.md
This section provides links to samples you can use to learn more about the Azure
> [!NOTE] > If you are interested in Azure AD V2 code samples, see [v2.0 code samples by scenario](../develop/sample-v2-code.md?toc=/azure/active-directory/azuread-dev/toc.json&bc=/azure/active-directory/azuread-dev/breadcrumb/toc.json).
+> [!WARNING]
+> Support for Active Directory Authentication Library (ADAL) will end in December 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](../develop/msal-migration.md).
+ To understand the basic scenario for each sample type, see [Authentication scenarios for Azure AD](v1-authentication-scenarios.md). You can also contribute to our samples on GitHub. To learn how, see [Microsoft Azure Active Directory samples and documentation](https://github.com/Azure-Samples?page=3&query=active-directory).
active-directory Howto Get List Of All Active Directory Auth Library Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-get-list-of-all-active-directory-auth-library-apps.md
Previously updated : 07/22/2021 Last updated : 03/03/2022
# Get a complete list of apps using ADAL in your tenant
-Support for Active Directory Authentication Library (ADAL) will end on June 30, 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. This article provides guidance on how to use Azure Monitor workbooks to obtain a list of all apps that use ADAL in your tenant.
+Support for Active Directory Authentication Library (ADAL) will end in December 2022. Apps using ADAL on existing OS versions will continue to work, but technical support and security updates will end. Without continued security updates, apps using ADAL will become increasingly vulnerable to the latest security attack patterns. For more information, see [Migrate apps to MSAL](msal-migration.md). This article provides guidance on how to use Azure Monitor workbooks to obtain a list of all apps that use ADAL in your tenant.
## Sign-ins workbook
active-directory Msal Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-migration.md
Previously updated : 07/22/2021 Last updated : 03/03/2022
If any of your applications use the Azure Active Directory Authentication Library (ADAL) for authentication and authorization functionality, it's time to migrate them to the [Microsoft Authentication Library (MSAL)](msal-overview.md#languages-and-frameworks). -- All Microsoft support and development for ADAL, including security fixes, ends on June 30, 2022.
+- All Microsoft support and development for ADAL, including security fixes, ends in December 2022.
+- There are no ADAL feature releases or new platform version releases planned prior to December 2022.
- No new features have been added to ADAL since June 30, 2020. > [!WARNING]
-> If you choose not to migrate to MSAL before ADAL support ends on June 30, 2022, you put your app's security at risk. Existing apps that use ADAL will continue to work after the end-of-support date, but Microsoft will no longer release security fixes on ADAL.
+> If you choose not to migrate to MSAL before ADAL support ends in December 2022, you put your app's security at risk. Existing apps that use ADAL will continue to work after the end-of-support date, but Microsoft will no longer release security fixes on ADAL.
## Why switch to MSAL?
MSAL provides multiple benefits over ADAL, including the following features:
|Features|MSAL|ADAL| |||| |**Security**|||
-|Security fixes beyond June 30, 2022|![Security fixes beyond June 30, 2022 - MSAL provides the feature][y]|![Security fixes beyond June 30, 2022 - ADAL doesn't provide the feature][n]|
+|Security fixes beyond December 2022|![Security fixes beyond December 2022 - MSAL provides the feature][y]|![Security fixes beyond December 2022 - ADAL doesn't provide the feature][n]|
| Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support [Continuous Access Evaluation (CAE)](app-resilience-continuous-access-evaluation.md).|![Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support Continuous Access Evaluation (CAE) - MSAL provides the feature][y]|![Proactively refresh and revoke tokens based on policy or critical events for Microsoft Graph and other APIs that support Continuous Access Evaluation (CAE) - ADAL doesn't provide the feature][n]| | Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) |![Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) - MSAL provides the feature][y]|![Standards compliant with OAuth v2.0 and OpenID Connect (OIDC) - ADAL doesn't provide the feature][n]| |**User accounts and experiences**|||
active-directory V2 Oauth2 Auth Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-auth-code-flow.md
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
| `response_type` | required | Must include `code` for the authorization code flow. Can also include `id_token` or `token` if using the [hybrid flow](#request-an-id-token-as-well-or-hybrid-flow). | | `redirect_uri` | required | The `redirect_uri` of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs you registered in the portal, except it must be URL-encoded. For native and mobile apps, use one of the recommended values: `https://login.microsoftonline.com/common/oauth2/nativeclient` for apps using embedded browsers or `http://localhost` for apps that use system browsers. | | `scope` | required | A space-separated list of [scopes](v2-permissions-and-consent.md) that you want the user to consent to. For the `/authorize` leg of the request, this parameter can cover multiple resources. This value allows your app to get consent for multiple web APIs you want to call. |
-| `response_mode` | recommended | Specifies the method that should be used to send the resulting token back to your app. It can be one of the following values:<br/><br/>- `query`<br/>- `fragment`<br/>- `form_post`<br/><br/>`query` provides the code as a query string parameter on your redirect URI. If you're requesting an ID token using the implicit flow, you can't use `query` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). If you're requesting just the code, you can use `query`, `fragment`, or `form_post`. `form_post` executes a POST containing the code to your redirect URI. |
+| `response_mode` | recommended | Specifies how the identity platform should return the requested token to your app. <br/><br/>Supported values:<br/><br/>- `query`: Default when requesting an access token. Provides the code as a query string parameter on your redirect URI. The `query` value isn't supported when requesting an ID token by using the implicit flow, as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations).<br/>- `fragment`: Default when requesting an ID token by using the implicit flow. Also supported if requesting *only* a code.<br/>- `form_post`: Executes a POST containing the code to your redirect URI. Supported when requesting a code. |
| `state` | recommended | A value included in the request that is also returned in the token response. It can be a string of any content that you wish. A randomly generated unique value is typically used for [preventing cross-site request forgery attacks](https://tools.ietf.org/html/rfc6749#section-10.12). The value can also encode information about the user's state in the app before the authentication request occurred. For instance, it could encode the page or view they were on. | | `prompt` | optional | Indicates the type of user interaction that is required. Valid values are `login`, `none`, `consent`, and `select_account`.<br/><br/>- `prompt=login` forces the user to enter their credentials on that request, negating single-sign on.<br/>- `prompt=none` is the opposite. It ensures that the user isn't presented with any interactive prompt. If the request can't be completed silently by using single-sign on, the Microsoft identity platform returns an `interaction_required` error.<br/>- `prompt=consent` triggers the OAuth consent dialog after the user signs in, asking the user to grant permissions to the app.<br/>- `prompt=select_account` interrupts single sign-on providing account selection experience listing all the accounts either in session or any remembered account or an option to choose to use a different account altogether.<br/> | | `login_hint` | optional | You can use this parameter to pre-fill the username and email address field of the sign-in page for the user. Apps can use this parameter during reauthentication, after already extracting the `login_hint` [optional claim](active-directory-optional-claims.md) from an earlier sign-in. |
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
|`response_type`| required | The addition of `id_token` indicates to the server that the application would like an ID token in the response from the `/authorize` endpoint. | |`scope`| required | For ID tokens, this parameter must be updated to include the ID token scopes: `openid` and optionally `profile` and `email`. | |`nonce`| required| A value included in the request, generated by the app, that is included in the resulting `id_token` as a claim. The app can then verify this value to mitigate token replay attacks. The value is typically a randomized, unique string that can be used to identify the origin of the request. |
-|`response_mode`| recommended | Specifies the method that should be used to send the resulting token back to your app. Default value is `query` for just an authorization code, but `fragment` if the request includes an `id_token` `response_type`. We recommend apps use `form_post`, especially when using `http://localhost` as a redirect URI. |
+|`response_mode`| recommended | Specifies the method that should be used to send the resulting token back to your app. Default value is `query` for just an authorization code, but `fragment` if the request includes an `id_token` `response_type` as specified in the [OpenID spec](https://openid.net/specs/oauth-v2-multiple-response-types-1_0.html#Combinations). We recommend apps use `form_post`, especially when using `http://localhost` as a redirect URI. |
The use of `fragment` as a response mode causes issues for web apps that read the code from the redirect. Browsers don't pass the fragment to the web server. In these situations, apps should use the `form_post` response mode to ensure that all data is sent to the server.
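To make these parameters concrete, a hybrid-flow authorization request that returns the code and ID token by using `form_post` might look like the following sketch (line breaks added for readability; the client ID shown is the sample value used elsewhere in this article, and the redirect URI, `state`, and `nonce` values are placeholders you would replace with your own):

```http
GET https://login.microsoftonline.com/{tenant}/oauth2/v2.0/authorize?
client_id=6731de76-14a6-49ae-97bc-6eba6914391e
&response_type=code%20id_token
&redirect_uri=http%3A%2F%2Flocalhost
&response_mode=form_post
&scope=openid%20profile
&state=12345
&nonce=678910
```

With `response_mode=form_post`, the resulting `code` and `id_token` arrive as POST body parameters at the redirect URI rather than in the URL, which is why this mode suits web apps that read the response on the server.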
active-directory Assign Local Admin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/assign-local-admin.md
To manage a Windows device, you need to be a member of the local administrators group. As part of the Azure Active Directory (Azure AD) join process, Azure AD updates the membership of this group on a device. You can customize the membership update to satisfy your business requirements. A membership update is, for example, helpful if you want to enable your helpdesk staff to do tasks requiring administrator rights on a device.
-This article explains how the local administrators membership update works and how you can customize it during an Azure AD Join. The content of this article doesn't apply to a **hybrid Azure AD joined** devices.
+This article explains how the local administrators membership update works and how you can customize it during an Azure AD Join. The content of this article doesn't apply to **hybrid Azure AD joined** devices.
## How it works
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
If your organization subscribes to the Azure Monitor service, you can use the [C
If your organization exports sign-in logs to a Security Information and Event Management (SIEM) system, you can retrieve required information from your SIEM system.
+## Identify changes to cross-tenant access settings
+
+The Azure AD audit logs capture all activity around changes to cross-tenant access settings. To audit those changes, filter the activity list on the **category** ***CrossTenantAccessSettings***.
+
+![Audit logs for cross-tenant access settings](media/cross-tenant-access-overview/cross-tenant-access-settings-audit-logs.png)
+ ## Next steps
-[Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md)
+[Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md)
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
To support temporary passwords in Azure AD for synchronized users, you can enabl
#### Account expiration
-If your organization uses the accountExpires attribute as part of user account management, this attribute is not synchronized to Azure AD. As a result, an expired Active Directory account in an environment configured for password hash synchronization will still be active in Azure AD. We recommend that if the account is expired, a workflow action should trigger a PowerShell script that disables the user's Azure AD account (use the [Set-AzureADUser](/powershell/module/azuread/set-azureaduser) cmdlet). Conversely, when the account is turned on, the Azure AD instance should be turned on.
+If your organization uses the accountExpires attribute as part of user account management, this attribute is not synchronized to Azure AD. As a result, an expired Active Directory account in an environment configured for password hash synchronization will still be active in Azure AD. We recommend using a scheduled PowerShell script that disables users' AD accounts once they expire (use the [Set-ADUser](/powershell/module/activedirectory/set-aduser) cmdlet). Conversely, when you remove the expiration from an AD account, re-enable the account.
### Overwrite synchronized passwords
active-directory F5 Big Ip Kerberos Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-kerberos-easy-button.md
This section defines all properties that you would normally use to manually conf
When a user successfully authenticates to Azure AD, it issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
-As our AD infrastructure is based on a .com domain suffix used both, internally and externally, we donΓÇÖt require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](f5-big-ip-kerberos-advanced.md) for cases where you have multiple domains or userΓÇÖs login using an alternate suffix.
+As our AD infrastructure is based on a .com domain suffix used both internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](./f5-big-ip-kerberos-advanced.md) for cases where you have multiple domains or users sign in using an alternate suffix.
![Screenshot for user attributes and claims](./media/f5-big-ip-kerberos-easy-button/user-attributes-claims.png)
active-directory F5 Big Ip Sap Erp Easy Button https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/f5-big-ip-sap-erp-easy-button.md
+
+ Title: Configure F5 BIG-IP Easy Button for SSO to SAP ERP
+description: Learn to secure SAP ERP using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+ Last updated : 3/1/2022
+# Tutorial: Configure F5's BIG-IP Easy Button for SSO to SAP ERP
+
+In this article, learn to secure SAP ERP using Azure Active Directory (Azure AD), through F5's BIG-IP Easy Button guided configuration.
+
+Integrating a BIG-IP with Azure Active Directory (Azure AD) provides many benefits, including:
+
+* [Improved Zero Trust governance](https://www.microsoft.com/security/blog/2020/04/02/announcing-microsoft-zero-trust-assessment-tool/) through Azure AD pre-authentication and [Conditional Access](/azure/active-directory/conditional-access/overview)
+
+* Full SSO between Azure AD and BIG-IP published services
+
+* Manage identities and access from a single control plane, the [Azure portal](https://portal.azure.com/)
+
+To learn about all of the benefits, see the article on [F5 BIG-IP and Azure AD integration](./f5-aad-integration.md) and [what is application access and single sign-on with Azure AD](/azure/active-directory/active-directory-appssoaccess-whatis).
+
+## Scenario description
+
+This scenario looks at the classic **SAP ERP application using Kerberos authentication** to manage access to protected content.
+
+Being legacy, the application lacks modern protocols to support a direct integration with Azure AD. The application can be modernized, but modernization is costly, requires careful planning, and introduces the risk of downtime. Instead, an F5 BIG-IP Application Delivery Controller (ADC) is used to bridge the gap between the legacy application and the modern ID control plane, through protocol transitioning.
+
+Having a BIG-IP in front of the application enables us to overlay the service with Azure AD pre-authentication and headers-based SSO, significantly improving the overall security posture of the application.
+
+## Scenario architecture
+
+The SHA solution for this scenario is made up of the following:
+
+**SAP ERP application:** BIG-IP published service to be protected by Azure AD SHA.
+
+**Azure AD:** Security Assertion Markup Language (SAML) Identity Provider (IdP) responsible for verification of user credentials, Conditional Access (CA), and SAML based SSO to the BIG-IP.
+
+**BIG-IP:** Reverse proxy and SAML service provider (SP) to the application, delegating authentication to the SAML IdP before performing header-based SSO to the SAP service.
+
+SHA for this scenario supports both SP and IdP initiated flows. The following image illustrates the SP initiated flow.
+
+![Secure hybrid access - SP initiated flow](./media/f5-big-ip-easy-button-sap-erp/sp-initiated-flow.png)
+
+| Steps| Description|
+| -- |-|
+| 1| User connects to application endpoint (BIG-IP) |
+| 2| BIG-IP APM access policy redirects user to Azure AD (SAML IdP) |
+| 3| Azure AD pre-authenticates user and applies any enforced Conditional Access policies |
+| 4| User is redirected to BIG-IP (SAML SP) and SSO is performed using issued SAML token |
+| 5| BIG-IP requests Kerberos ticket from KDC |
+| 6| BIG-IP sends request to backend application, along with Kerberos ticket for SSO |
+| 7| Application authorizes request and returns payload |
+
+## Prerequisites
+Prior BIG-IP experience isn't necessary, but you will need:
+
+* An Azure AD free subscription or above
+
+* An existing BIG-IP or [deploy a BIG-IP Virtual Edition (VE) in Azure](./f5-bigip-deployment-guide.md)
+
+* Any of the following F5 BIG-IP license offers
+
+ * F5 BIG-IP® Best bundle
+
+ * F5 BIG-IP APM standalone license
+
+ * F5 BIG-IP APM add-on license on an existing BIG-IP F5 BIG-IP® Local Traffic Manager™ (LTM)
+
+ * 90-day BIG-IP full feature [trial license](https://www.f5.com/trial/big-ip-trial.php).
+
+* User identities [synchronized](../hybrid/how-to-connect-sync-whatis.md) from an on-premises directory to Azure AD, or created directly within Azure AD and flowed back to your on-premises directory
+
+* An account with Azure AD Application admin [permissions](/azure/active-directory/users-groups-roles/directory-assign-admin-roles#application-administrator)
+
+* An [SSL Web certificate](./f5-bigip-deployment-guide.md) for publishing services over HTTPS, or use default BIG-IP certs while testing
+
+* An existing SAP ERP environment configured for Kerberos authentication
+
+## BIG-IP configuration methods
+
+There are many methods to configure BIG-IP for this scenario, including two template-based options and an advanced configuration. This tutorial covers the latest Guided Configuration 16.1, which offers an Easy Button template.
+
+With the Easy Button, admins no longer go back and forth between Azure AD and a BIG-IP to enable services for SHA. Deployment and policy management are handled directly between the APM's Guided Configuration wizard and Microsoft Graph. This rich integration between BIG-IP APM and Azure AD ensures that applications can quickly and easily support identity federation, SSO, and Azure AD Conditional Access, reducing administrative overhead.
+
+>[!NOTE]
+> All example strings or values referenced throughout this guide should be replaced with those for your actual environment.
+
+## Register Easy Button
+
+Before a client or service can access Microsoft Graph, it must be trusted by the [Microsoft identity platform](/azure/active-directory/develop/quickstart-register-app).
+
+The Easy Button client must also be registered in Azure AD before it's allowed to establish a trust between each SAML SP instance of a BIG-IP published application and Azure AD as the SAML IdP.
+
+1. Sign in to the [Azure AD portal](https://portal.azure.com/) using an account with Application Administrative rights
+
+2. From the left navigation pane, select the **Azure Active Directory** service
+
+3. Under Manage, select **App registrations > New registration**
+
+4. Enter a display name for your application. For example, *F5 BIG-IP Easy Button*
+
+5. Specify who can use the application > **Accounts in this organizational directory only**
+
+6. Select **Register** to complete the initial app registration
+
+7. Navigate to **API permissions** and authorize the following Microsoft Graph **Application permissions**:
+
+ * Application.Read.All
+ * Application.ReadWrite.All
+ * Application.ReadWrite.OwnedBy
+ * Directory.Read.All
+ * Group.Read.All
+ * IdentityRiskyUser.Read.All
+ * Policy.Read.All
+ * Policy.ReadWrite.ApplicationConfiguration
+ * Policy.ReadWrite.ConditionalAccess
+ * User.Read.All
+
+8. Grant admin consent for your organization
+
+9. In the **Certificates & Secrets** blade, generate a new **client secret** and note it down
+
+10. From the **Overview** blade, note the **Client ID** and **Tenant ID**
+
+## Configure Easy Button
+
+Initiate the APM's **Guided Configuration** to launch the **Easy Button** Template.
+
+1. From a browser, sign in to the **F5 BIG-IP management console**
+
+2. Navigate to **Access > Guided Configuration > Microsoft Integration** and select **Azure AD Application**.
+
+ ![Screenshot for Configure Easy Button- Install the template](./media/f5-big-ip-easy-button-ldap/easy-button-template.png)
+
+3. Review the list of configuration steps and select **Next**
+
+ ![Screenshot for Configure Easy Button - List configuration steps](./media/f5-big-ip-easy-button-ldap/config-steps.png)
+
+4. Follow the sequence of steps required to publish your application.
+
+ ![Configuration steps flow](./media/f5-big-ip-easy-button-ldap/config-steps-flow.png#lightbox)
+
+### Configuration Properties
+
+These are general and service account properties. The **Configuration Properties** tab creates a BIG-IP application config and SSO object. Consider the **Azure Service Account Details** section to represent the client you registered in your Azure AD tenant earlier, as an application. These settings allow a BIG-IP's OAuth client to individually register a SAML SP directly in your tenant, along with the SSO properties you would normally configure manually. Easy Button does this for every BIG-IP service being published and enabled for SHA.
+
+Some of these are global settings, so they can be re-used for publishing more applications, further reducing deployment time and effort.
+
+1. Provide a unique **Configuration Name** so admins can easily distinguish between Easy Button configurations
+
+2. Enable **Single Sign-On (SSO) & HTTP Headers**
+
+3. Enter the **Tenant Id, Client ID,** and **Client Secret** you noted when registering the Easy Button client in your tenant
+
+4. Confirm the BIG-IP can successfully connect to your tenant and select **Next**
+
+ ![Screenshot for Configuration General and Service Account properties](./media/f5-big-ip-easy-button-sap-erp/configuration-general-and-service-account-properties.png)
+
+### Service Provider
+
+The Service Provider settings define the properties for the SAML SP instance of the application protected through SHA.
+
+1. Enter **Host**. This is the public FQDN of the application being secured
+
+2. Enter **Entity ID.** This is the identifier Azure AD will use to identify the SAML SP requesting a token
+
+ ![Screenshot for Service Provider settings](./media/f5-big-ip-easy-button-sap-erp/service-provider-settings.png)
+
+ The optional **Security Settings** specify whether Azure AD should encrypt issued SAML assertions. Encrypting assertions between Azure AD and the BIG-IP APM provides additional assurance that the token content can't be intercepted and that personal or corporate data can't be compromised.
+
+3. From the **Assertion Decryption Private Key** list, select **Create New**
+
+ ![Screenshot for Configure Easy Button- Create New import](./media/f5-big-ip-oracle/configure-security-create-new.png)
+
+4. Select **OK**. This opens the **Import SSL Certificate and Keys** dialog in a new tab
+
+5. Select **PKCS 12 (IIS)** to import your certificate and private key. Once provisioned, close the browser tab to return to the main tab
+
+ ![Screenshot for Configure Easy Button- Import new cert](./media/f5-big-ip-easy-button-sap-erp/import-ssl-certificates-and-keys.png)
+
+6. Check **Enable Encrypted Assertion**
+
+7. If you have enabled encryption, select your certificate from the **Assertion Decryption Private Key** list. This is the private key for the certificate that BIG-IP APM will use to decrypt Azure AD assertions
+
+8. If you have enabled encryption, select your certificate from the **Assertion Decryption Certificate** list. This is the certificate that BIG-IP will upload to Azure AD for encrypting the issued SAML assertions
+
+ ![Screenshot for Service Provider security settings](./media/f5-big-ip-easy-button-ldap/service-provider-security-settings.png)
+
+### Azure Active Directory
+
+This section defines all properties that you would normally use to manually configure a new BIG-IP SAML application within your Azure AD tenant.
+
+Easy Button provides a set of pre-defined application templates for Oracle PeopleSoft, Oracle E-business Suite, Oracle JD Edwards, and SAP ERP, as well as a generic SHA template for any other apps. For this scenario, select **SAP ERP Central Component > Add** to start the Azure configurations.
+
+ ![Screenshot for Azure configuration add BIG-IP application](./media/f5-big-ip-easy-button-sap-erp/azure-config-add-app.png)
+
+#### Azure Configuration
+
+1. Enter the **Display Name** of the app that the BIG-IP creates in your Azure AD tenant, and the icon that users will see in the [MyApps portal](https://myapplications.microsoft.com/)
+
+2. Leave the **Sign On URL (optional)** blank to enable IdP initiated sign-on
+
+ ![Screenshot for Azure configuration add display info](./media/f5-big-ip-easy-button-sap-erp/azure-configuration-add-display-info.png)
+
+3. Select the refresh icon next to the **Signing Key** and **Signing Certificate** to locate the certificate you imported earlier
+
+4. Enter the certificate's password in **Signing Key Passphrase**
+
+5. Enable **Signing Option** (optional). This ensures that BIG-IP only accepts tokens and claims that are signed by Azure AD
+
+ ![Screenshot for Azure configuration - Add signing certificates info](./media/f5-big-ip-easy-button-ldap/azure-configuration-sign-certificates.png)
+
+6. **User and User Groups** are dynamically queried from your Azure AD tenant and used to authorize access to the application. Add a user or group that you can use later for testing, otherwise all access will be denied
+
+ ![Screenshot for Azure configuration - Add users and groups](./media/f5-big-ip-easy-button-ldap/azure-configuration-add-user-groups.png)
+
+#### User Attributes & Claims
+
+When a user successfully authenticates to Azure AD, it issues a SAML token with a default set of claims and attributes uniquely identifying the user. The **User Attributes & Claims tab** shows the default claims to issue for the new application. It also lets you configure more claims.
+
+As our example AD infrastructure is based on a .com domain suffix used both internally and externally, we don't require any additional attributes to achieve a functional KCD SSO implementation. See the [advanced tutorial](./f5-big-ip-kerberos-advanced.md) for cases where you have multiple domains or users sign in with an alternate suffix.
+
+ ![Screenshot for user attributes and claims](./media/f5-big-ip-easy-button-sap-erp/user-attributes-claims.png)
+
+You can include additional Azure AD attributes, if necessary, but for this scenario SAP ERP only requires the default attributes.
+
+#### Additional User Attributes
+
+The **Additional User Attributes** tab can support a variety of distributed systems requiring attributes stored in other directories, for session augmentation. Attributes fetched from an LDAP source can then be injected as additional SSO headers to further control access based on roles, Partner IDs, etc.
+
+ ![Screenshot for additional user attributes](./media/f5-big-ip-easy-button-header/additional-user-attributes.png)
+
+>[!NOTE]
+>This feature has no correlation to Azure AD but is another source of attributes.
+
+#### Conditional Access Policy
+
+Conditional Access policies are enforced after Azure AD pre-authentication, to control access based on device, application, location, and risk signals.
+
+The **Available Policies** view, by default, will list all CA policies that do not include user-based actions.
+
+The **Selected Policies** view, by default, displays all policies targeting All cloud apps. These policies cannot be deselected or moved to the Available Policies list because they are enforced at a tenant level.
+
+To select a policy to be applied to the application being published:
+
+1. Select the desired policy in the **Available Policies** list
+2. Select the right arrow and move it to the **Selected Policies** list
+
+A selected policy should have either its **Include** or **Exclude** option checked. If both options are checked, the selected policy is not enforced.
+
+![ Screenshot for CA policies](./media/f5-big-ip-easy-button-ldap/conditional-access-policy.png)
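The Include/Exclude rule above amounts to: a selected policy takes effect only when exactly one of the two options is checked. A minimal sketch of that interpretation (not F5 code; treating a policy with neither box checked as unenforced is an assumption of this sketch):

```python
def policy_enforced(include_checked: bool, exclude_checked: bool) -> bool:
    """A selected Conditional Access policy takes effect only when exactly one
    of Include/Exclude is checked; checking both (or neither) disables it."""
    return include_checked != exclude_checked

# Checking both options leaves the selected policy unenforced
both_checked = policy_enforced(True, True)
only_include = policy_enforced(True, False)
```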
+
+>[!NOTE]
+>The policy list is enumerated only once when first switching to this tab. A refresh button is available to manually force the wizard to query your tenant, but this button is displayed only when the application has been deployed.
+
+### Virtual Server Properties
+
+A virtual server is a BIG-IP data plane object represented by a virtual IP address listening for client requests to the application. Any received traffic is processed and evaluated against the APM profile associated with the virtual server, before being directed according to the policy results and settings.
+
+1. Enter **Destination Address**. This is any available IPv4/IPv6 address that the BIG-IP can use to receive client traffic. A corresponding record should also exist in DNS, enabling clients to resolve the external URL of your BIG-IP published application to this IP, instead of the application itself. Using a test PC's localhost DNS is fine for testing
+
+2. Enter **Service Port** as *443* for HTTPS
+
+3. Check **Enable Redirect Port** and then enter **Redirect Port**. It redirects incoming HTTP client traffic to HTTPS
+
+4. The Client SSL Profile enables the virtual server for HTTPS, so that client connections are encrypted over TLS. Select the **Client SSL Profile** you created as part of the prerequisites or leave the default whilst testing
+
+ ![ Screenshot for Virtual server](./media/f5-big-ip-easy-button-ldap/virtual-server.png)
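The redirect-port behavior in step 3 is equivalent to answering plain-HTTP requests with a redirect to the HTTPS listener, which the BIG-IP implements natively. A rough sketch of that behavior (the host name is a placeholder):

```python
def https_redirect(host: str, path: str, https_port: int = 443) -> dict:
    """Mimic the virtual server's port redirect: answer an HTTP request
    with a 301 pointing at the HTTPS listener for the same host/path."""
    port = "" if https_port == 443 else f":{https_port}"
    return {"status": 301, "Location": f"https://{host}{port}{path}"}

resp = https_redirect("app.contoso.com", "/login")
```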
+
+### Pool Properties
+
+The **Application Pool tab** details the services behind a BIG-IP, represented as a pool containing one or more application servers.
+
+1. Choose from **Select a Pool.** Create a new pool or select an existing one
+
+2. Choose the **Load Balancing Method** as *Round Robin*
+
+3. For **Pool Servers** select an existing server node or specify an IP and port for the backend node hosting the header-based application
+
+ ![ Screenshot for Application pool](./media/f5-big-ip-easy-button-ldap/application-pool.png)
+
+#### Single Sign-On & HTTP Headers
+
+Enabling SSO allows users to access BIG-IP published services without having to enter credentials. The **Easy Button wizard** supports Kerberos, OAuth Bearer, and HTTP authorization headers for SSO. You will need the Kerberos delegation account created earlier to complete this step.
+
+Enable **Kerberos** and **Show Advanced Setting** to enter the following:
+
+* **Username Source:** Specifies the preferred username to cache for SSO. You can provide any session variable as the source of the user ID, but *session.saml.last.identity* tends to work best as it holds the Azure AD claim containing the logged in user ID
+
+* **User Realm Source:** Required if the user domain is different from the BIG-IP's Kerberos realm. In that case, the APM session variable would contain the logged-in user domain. For example, *session.saml.last.attr.name.domain*
+
+ ![Screenshot for SSO and HTTP headers](./media/f5-big-ip-kerberos-easy-button/sso-headers.png)
+
+* **KDC:** IP of a domain controller (or FQDN if DNS is configured and efficient)
+
+* **UPN Support:** Enable for the APM to use the UPN for Kerberos ticketing
+
+* **SPN Pattern:** Use `HTTP/%h` to instruct the APM to use the host header of the client request when building the SPN for which it requests a Kerberos ticket.
+
+* **Send Authorization:** Disable for applications that prefer negotiating authentication instead of receiving the Kerberos token in the first request. For example, *Tomcat*.
+
+ ![Screenshot for SSO method configuration](./media/f5-big-ip-kerberos-easy-button/sso-method-config.png)
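Putting the settings above together, the APM effectively derives the SSO username, realm, and SPN from session variables and the request's host header. The sketch below is an illustrative approximation using the variable names referenced above, not APM internals:

```python
def derive_kerberos_sso(session_vars: dict, host_header: str,
                        spn_pattern: str = "HTTP/%h"):
    """Approximate how the APM assembles its Kerberos SSO inputs
    (illustrative only; real APM behavior is configured, not coded)."""
    user = session_vars["session.saml.last.identity"]                # Username Source
    realm = session_vars.get("session.saml.last.attr.name.domain")  # User Realm Source
    spn = spn_pattern.replace("%h", host_header)                    # SPN Pattern
    return user, realm, spn

user, realm, spn = derive_kerberos_sso(
    {"session.saml.last.identity": "b.simon",
     "session.saml.last.attr.name.domain": "CONTOSO.COM"},
    host_header="app.contoso.com")
```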
+
+### Session Management
+
+The BIG-IP's session management settings define the conditions under which user sessions are terminated or allowed to continue, limits for users and IP addresses, and corresponding user info. Consult [F5 documentation](https://support.f5.com/csp/article/K18390492) for details on these settings.
+
+What isn't covered, however, is Single Log-Out (SLO) functionality, which ensures all sessions between the IdP, the BIG-IP, and the user agent are terminated as users log off. When the Easy Button deploys a SAML application to your Azure AD tenant, it also populates the Logout URL with the APM's SLO endpoint. That way, IdP-initiated sign-outs from the Microsoft [MyApps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510) also terminate the session between the BIG-IP and a client.
+
+During deployment, the SAML federation metadata for the published application is imported from your tenant, providing the APM with the SAML logout endpoint for Azure AD. This helps SP-initiated sign-outs terminate the session between a client and Azure AD.
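Conceptually, the logout endpoint comes from the `SingleLogoutService` element of the imported federation metadata. A sketch of that lookup against a truncated, illustrative metadata document (a real Azure AD document also carries signing certificates and sign-on endpoints; the tenant GUID is a placeholder):

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for Azure AD federation metadata (illustrative only)
METADATA = """<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
  entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <SingleLogoutService
      Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
      Location="https://login.microsoftonline.com/common/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

def logout_endpoint(metadata_xml: str) -> str:
    """Extract the IdP's SingleLogoutService location from SAML metadata,
    which is what gives the APM its Azure AD logout endpoint."""
    ns = {"md": "urn:oasis:names:tc:SAML:2.0:metadata"}
    root = ET.fromstring(metadata_xml)
    return root.find(".//md:SingleLogoutService", ns).get("Location")

endpoint = logout_endpoint(METADATA)
```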
+
+## Summary
+
+This last step provides a breakdown of your configurations. Select **Deploy** to commit all settings and verify that the application now exists in your tenant's list of Enterprise applications.
+
+## Next steps
+
+From a browser, **connect** to the application's external URL or select the **application's icon** in the [Microsoft MyApps portal](https://myapps.microsoft.com/). After authenticating to Azure AD, you'll be redirected to the BIG-IP virtual server for the application and automatically signed in through SSO.
+
+For increased security, organizations using this pattern could also consider blocking all direct access to the application, thereby forcing a strict path through the BIG-IP.
+
+## Advanced deployment
+
+There may be cases where the Guided Configuration templates lack the flexibility to achieve more specific requirements. For those scenarios, see [Advanced Configuration for Kerberos-based SSO](./f5-big-ip-kerberos-advanced.md).
+
+Alternatively, the BIG-IP gives you the option to disable the **Guided Configuration's strict management mode**. This allows you to manually tweak your configurations, even though the bulk of your configurations are automated through the wizard-based templates.
+
+You can navigate to **Access > Guided Configuration** and select the **small padlock icon** on the far right of the row for your application's configs.
+
+ ![Screenshot for Configure Easy Button - Strict Management](./media/f5-big-ip-oracle/strict-mode-padlock.png)
+
+At that point, changes via the wizard UI are no longer possible, but all BIG-IP objects associated with the published instance of the application will be unlocked for direct management.
+
+>[!NOTE]
+>Re-enabling strict mode and deploying a configuration will overwrite any settings performed outside of the Guided Configuration UI, therefore we recommend the advanced configuration method for production services.
+
+## Troubleshooting
+
+Failure to access the SHA-protected application can be caused by any number of factors, including a misconfiguration.
+
+* Kerberos is time sensitive, so requires that servers and clients be set to the correct time and where possible synchronized to a reliable time source
+
+* Ensure the hostname for the domain controller and web application are resolvable in DNS
+
+* Ensure there are no duplicate SPNs in your AD environment by executing the following query at the command line on a domain PC: `setspn -q HTTP/my_target_SPN`
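The duplicate-SPN check matters because KCD breaks when one SPN is registered to more than one account. As a hypothetical illustration of what the `setspn` query guards against, given an exported list of (account, SPN) registrations (account and host names below are placeholders):

```python
from collections import Counter

def duplicate_spns(registrations):
    """Given (account, spn) pairs, return SPNs registered to more than one
    account, which break KCD. Comparison is case-sensitive for simplicity."""
    counts = Counter(spn for _account, spn in registrations)
    return sorted(spn for spn, n in counts.items() if n > 1)

dupes = duplicate_spns([
    ("CONTOSO\\svc-web1", "HTTP/app.contoso.com"),
    ("CONTOSO\\svc-web2", "HTTP/app.contoso.com"),   # duplicate registration
    ("CONTOSO\\svc-web1", "HTTP/intranet.contoso.com"),
])
```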
+
+You can refer to our [App Proxy guidance](../app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md) to validate that an IIS application is configured appropriately for KCD. F5's article on [how the APM handles Kerberos SSO](https://techdocs.f5.com/en-us/bigip-15-1-0/big-ip-access-policy-manager-single-sign-on-concepts-configuration/kerberos-single-sign-on-method.html) is also a valuable resource.
+
+### Log analysis
+
+BIG-IP logging can help quickly isolate all sorts of issues with connectivity, SSO, policy violations, or misconfigured variable mappings. Start troubleshooting by increasing the log verbosity level.
+
+1. Navigate to **Access Policy > Overview > Event Logs > Settings**
+
+2. Select the row for your published application, then **Edit > Access System Logs**
+
+3. Select **Debug** from the SSO list, and then select **OK**
+
+Reproduce your issue, then inspect the logs. Remember to switch this back when finished, as verbose mode generates lots of data.
+
+If you see a BIG-IP branded error immediately after successful Azure AD pre-authentication, it's possible the issue relates to SSO from Azure AD to the BIG-IP.
+
+1. Navigate to **Access > Overview > Access reports**
+
+2. Run the report for the last hour to see if the logs provide any clues. The **View session variables** link for your session will also help you understand if the APM is receiving the expected claims from Azure AD.
+
+If you don't see a BIG-IP error page, then the issue is probably more related to the backend request or SSO from the BIG-IP to the application.
+
+1. Navigate to **Access Policy > Overview > Active Sessions**
+
+2. Select the link for your active session. The **View Variables** link in this location may also help determine the root cause of KCD issues, particularly if the BIG-IP APM fails to obtain the right user and domain identifiers from session variables
+
+See [BIG-IP APM variable assign examples](https://devcentral.f5.com/s/articles/apm-variable-assign-examples-1107) and [F5 BIG-IP session variables reference](https://techdocs.f5.com/en-us/bigip-15-0-0/big-ip-access-policy-manager-visual-policy-editor/session-variables.html) for more info.
active-directory Allocadia Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/allocadia-tutorial.md
Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Allocadia | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Allocadia'
description: Learn how to configure single sign-on between Azure Active Directory and Allocadia.
Previously updated : 12/17/2019 Last updated : 02/25/2022
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Allocadia
+# Tutorial: Azure AD SSO integration with Allocadia
In this tutorial, you'll learn how to integrate Allocadia with Azure Active Directory (Azure AD). When you integrate Allocadia with Azure AD, you can:
In this tutorial, you'll learn how to integrate Allocadia with Azure Active Dire
* Enable your users to be automatically signed-in to Allocadia with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites To get started, you need the following items:
To get started, you need the following items:
In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Allocadia supports **IDP** initiated SSO
-* Allocadia supports **Just In Time** user provisioning
+* Allocadia supports **IDP** initiated SSO.
+* Allocadia supports **Just In Time** user provisioning.
-## Adding Allocadia from the gallery
+## Add Allocadia from the gallery
To configure the integration of Allocadia into Azure AD, you need to add Allocadia from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service. 1. Navigate to **Enterprise Applications** and then select **All Applications**. 1. To add new application, select **New application**. 1. In the **Add from the gallery** section, type **Allocadia** in the search box. 1. Select **Allocadia** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on for Allocadia
+## Configure and test Azure AD SSO for Allocadia
Configure and test Azure AD SSO with Allocadia using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Allocadia.
-To configure and test Azure AD SSO with Allocadia, complete the following building blocks:
+To configure and test Azure AD SSO with Allocadia, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- * **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- * **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
1. **[Configure Allocadia SSO](#configure-allocadia-sso)** - to configure the single sign-on settings on application side.
- * **[Create Allocadia test user](#create-allocadia-test-user)** - to have a counterpart of B.Simon in Allocadia that is linked to the Azure AD representation of user.
+ 1. **[Create Allocadia test user](#create-allocadia-test-user)** - to have a counterpart of B.Simon in Allocadia that is linked to the Azure AD representation of user.
1. **[Test SSO](#test-sso)** - to verify whether the configuration works. ## Configure Azure AD SSO Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Allocadia** application integration page, find the **Manage** section and select **single sign-on**.
+1. In the Azure portal, on the **Allocadia** application integration page, find the **Manage** section and select **single sign-on**.
1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Set up single sign-on with SAML** page, enter the values for the following fields:
-
- a. In the **Identifier** text box, type a URL using the following pattern:
-
- For test environment - `https://na2standby.allocadia.com`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- For production environment - `https://na2.allocadia.com`
+ a. In the **Identifier** text box, type one of the following URLs:
- b. In the **Reply URL** text box, type a URL using the following pattern:
+ | **Identifier** |
+ |- |
+ | For test environment - `https://na2standby.allocadia.com` |
+ | For production environment - `https://na2.allocadia.com` |
- For test environment - `https://na2standby.allocadia.com/allocadia/saml/SSO`
+ b. In the **Reply URL** text box, type one of the following URLs:
- For production environment - `https://na2.allocadia.com/allocadia/saml/SSO`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier and Reply URL. Contact [Allocadia Client support team](mailto:support@allocadia.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ | **Reply URL** |
+ |--|
+ | For test environment - `https://na2standby.allocadia.com/allocadia/saml/SSO` |
+ | For production environment - `https://na2.allocadia.com/allocadia/saml/SSO` |
1. Allocadia application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
Follow these steps to enable Azure AD SSO in the Azure portal.
| firstname | user.givenname | | lastname | user.surname | | email | user.mail |
- | | |
1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
In this section, you'll enable B.Simon to use Azure single sign-on by granting a
1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
1. In the applications list, select **Allocadia**.
1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
-   ![The "Users and groups" link](common/users-groups-blade.png)
-
1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
-   ![The Add User link](common/add-assign-user.png)
-
1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
1. In the **Add Assignment** dialog, click the **Assign** button.
In this section, a user called B.Simon is created in Allocadia. Allocadia suppor
## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Allocadia tile in the Access Panel, you should be automatically signed in to the Allocadia for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
+In this section, you test your Azure AD single sign-on configuration with following options.
-- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)
+* Click on **Test this application** in the Azure portal and you should be automatically signed in to the Allocadia for which you set up the SSO.
-- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)
+* You can use Microsoft My Apps. When you click the Allocadia tile in the My Apps, you should be automatically signed in to the Allocadia for which you set up the SSO. For more information about the My Apps, see [Introduction to the My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+## Next steps
-- [Try Allocadia with Azure AD](https://aad.portal.azure.com/)
+Once you configure Allocadia you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Bic Cloud Design Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bic-cloud-design-provisioning-tutorial.md
na-+ Last updated 11/15/2021
active-directory Bullseyetdp Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/bullseyetdp-provisioning-tutorial.md
ms.devlang: na-+ Last updated 02/03/2022
active-directory Culture Shift Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/culture-shift-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with Culture Shift'
+description: Learn how to configure single sign-on between Azure Active Directory and Culture Shift.
+ Last updated : 02/24/2022
+# Tutorial: Azure AD SSO integration with Culture Shift
+
+In this tutorial, you'll learn how to integrate Culture Shift with Azure Active Directory (Azure AD). When you integrate Culture Shift with Azure AD, you can:
+
+* Control in Azure AD who has access to Culture Shift.
+* Enable your users to be automatically signed-in to Culture Shift with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Culture Shift single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Culture Shift supports **SP** initiated SSO.
+
+> [!NOTE]
+> Identifier of this application is a fixed string value so only one instance can be configured in one tenant.
+
+## Add Culture Shift from the gallery
+
+To configure the integration of Culture Shift into Azure AD, you need to add Culture Shift from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Culture Shift** in the search box.
+1. Select **Culture Shift** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for Culture Shift
+
+Configure and test Azure AD SSO with Culture Shift using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Culture Shift.
+
+To configure and test Azure AD SSO with Culture Shift, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Culture Shift SSO](#configure-culture-shift-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Culture Shift test user](#create-culture-shift-test-user)** - to have a counterpart of B.Simon in Culture Shift that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Culture Shift** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier (Entity ID)** text box, type the value:
+ `urn:amazon:cognito:sp:eu-west-2_tWqrsHU3a`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://auth.reportandsupport.co.uk/saml2/idpresponse`
+
+ c. In the **Sign on URL** text box, type the URL:
+ `https://dashboard.reportandsupport.co.uk/`
+
+1. The Culture Shift application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![image](common/default-attributes.png)
+
+1. In addition to the above, the Culture Shift application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also pre-populated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | -| |
+ | displayname | user.displayname |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url** and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Culture Shift.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Culture Shift**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Culture Shift SSO
+
+To configure single sign-on on the **Culture Shift** side, send the **App Federation Metadata Url** to the [Culture Shift support team](mailto:tickets@culture-shift.co.uk). They use it to configure the SAML SSO connection properly on both sides.
+
+### Create Culture Shift test user
+
+In this section, you create a user called Britta Simon in Culture Shift. Work with the [Culture Shift support team](mailto:tickets@culture-shift.co.uk) to add the users to the Culture Shift platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the Culture Shift Sign-on URL, where you can initiate the login flow.
+
+* Go to the Culture Shift Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the Culture Shift tile in My Apps, you are redirected to the Culture Shift Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Culture Shift, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Directprint Io Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/directprint-io-provisioning-tutorial.md
na-+ Last updated 09/24/2021
active-directory Facebook Work Accounts Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/facebook-work-accounts-provisioning-tutorial.md
na-+ Last updated 10/27/2021
active-directory Frankli Io Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/frankli-io-provisioning-tutorial.md
ms.assetid: 936223d1-7ba5-4300-b05b-cbf78ee45d0e
-+ Last updated 12/16/2021
active-directory Gong Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/gong-provisioning-tutorial.md
ms.devlang: na-+ Last updated 02/09/2022
active-directory Klaxoon Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-provisioning-tutorial.md
na-+ Last updated 09/22/2021
active-directory Klaxoon Saml Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/klaxoon-saml-provisioning-tutorial.md
na-+ Last updated 09/22/2021
active-directory Lanschool Air Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lanschool-air-provisioning-tutorial.md
ms.devlang: na-+ Last updated 02/03/2022
active-directory Meta Networks Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/meta-networks-connector-provisioning-tutorial.md
The scenario outlined in this tutorial assumes that you already have the followi
* [A Meta Networks Connector tenant](https://www.metanetworks.com/)
* A user account in Meta Networks Connector with Admin permissions.
+## Step 1. Plan your provisioning deployment
+1. Learn about [how the provisioning service works](../app-provisioning/user-provisioning.md).
+1. Determine who will be in [scope for provisioning](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. Determine what data to [map between Azure AD and Meta Networks Connector](../app-provisioning/customize-application-attributes.md).
+
## Assigning users to Meta Networks Connector

Azure Active Directory uses a concept called *assignments* to determine which users should receive access to selected apps. In the context of automatic user provisioning, only the users and/or groups that have been assigned to an application in Azure AD are synchronized.
Before configuring and enabling automatic user provisioning, you should decide w
* When assigning a user to Meta Networks Connector, you must select any valid application-specific role (if available) in the assignment dialog. Users with the **Default Access** role are excluded from provisioning.
-## Setup Meta Networks Connector for provisioning
+## Step 2. Configure Meta Networks Connector for provisioning
1. Sign in to your [Meta Networks Connector Admin Console](https://login.metanetworks.com/login/) using your organization name. Navigate to **Administration > API Keys**. ![Meta Networks Connector Admin Console](media/meta-networks-connector-provisioning-tutorial/apikey.png)
-2. Click on the plus sign on the upper right side of the screen to create a new **API Key**.
+1. Click on the plus sign on the upper right side of the screen to create a new **API Key**.
![Meta Networks Connector plus icon](media/meta-networks-connector-provisioning-tutorial/plusicon.png)
-3. Set the **API Key Name** and **API Key Description**.
+1. Set the **API Key Name** and **API Key Description**.
:::image type="content" source="media/meta-networks-connector-provisioning-tutorial/keyname.png" alt-text="Screenshot of the Meta Networks Connector Admin Console with highlighted A P I key name and A P I key description values of Azure A D and A P I key." border="false":::
-4. Turn on **Write** privileges for **Groups** and **Users**.
+1. Turn on **Write** privileges for **Groups** and **Users**.
![Meta Networks Connector privileges](media/meta-networks-connector-provisioning-tutorial/privileges.png)
-5. Click on **Add**. Copy the **SECRET** and save it as this will be the only time you can view it. This value will be entered in the Secret Token field in the Provisioning tab of your Meta Networks Connector application in the Azure portal.
+1. Click on **Add**. Copy the **SECRET** and save it, as this is the only time you can view it. You will enter this value in the **Secret Token** field in the Provisioning tab of your Meta Networks Connector application in the Azure portal.
:::image type="content" source="media/meta-networks-connector-provisioning-tutorial/token.png" alt-text="Screenshot of a window telling users that the A P I key was added. The Secret box contains an indecipherable value and is highlighted." border="false":::
-6. Add an IdP by navigating to **Administration > Settings > IdP > Create New**.
+1. Add an IdP by navigating to **Administration > Settings > IdP > Create New**.
![Meta Networks Connector Add IdP](media/meta-networks-connector-provisioning-tutorial/newidp.png)
-7. In the **IdP Configuration** page you can **Name** your IdP configuration and choose an **Icon**.
+1. On the **IdP Configuration** page, you can **Name** your IdP configuration and choose an **Icon**.
![Meta Networks Connector IdP Name](media/meta-networks-connector-provisioning-tutorial/idpname.png) ![Meta Networks Connector IdP Icon](media/meta-networks-connector-provisioning-tutorial/icon.png)
-8. Under **Configure SCIM** select the API key name created in the previous steps. Click on **Save**.
+1. Under **Configure SCIM** select the API key name created in the previous steps. Click on **Save**.
![Meta Networks Connector configure SCIM](media/meta-networks-connector-provisioning-tutorial/configure.png)
-9. Navigate to **Administration > Settings > IdP tab**. Click on the name of the IdP configuration created in the previous steps to view the **IdP ID**. This **ID** is added to the end of **Tenant URL** while entering the value in **Tenant URL** field in the Provisioning tab of your Meta Networks Connector application in the Azure portal.
+1. Navigate to the **Administration > Settings > IdP** tab. Click the name of the IdP configuration created in the previous steps to view the **IdP ID**. Append this **ID** to the end of the **Tenant URL** when entering the value in the **Tenant URL** field in the Provisioning tab of your Meta Networks Connector application in the Azure portal.
![Meta Networks Connector IdP ID](media/meta-networks-connector-provisioning-tutorial/idpid.png)
-## Add Meta Networks Connector from the gallery
-
-Before configuring Meta Networks Connector for automatic user provisioning with Azure AD, you need to add Meta Networks Connector from the Azure AD application gallery to your list of managed SaaS applications.
-
-**To add Meta Networks Connector from the Azure AD application gallery, perform the following steps:**
+## Step 3. Add Meta Networks Connector from the Azure AD application gallery
-1. In the **[Azure portal](https://portal.azure.com)**, in the left navigation panel, select **Azure Active Directory**.
+Add Meta Networks Connector from the Azure AD application gallery to start managing provisioning to Meta Networks Connector. If you have previously set up Meta Networks Connector for SSO, you can use the same application. However, it is recommended that you create a separate app when initially testing the integration. Learn more about adding an application from the gallery [here](../manage-apps/add-application-portal.md).
- ![The Azure Active Directory button](common/select-azuread.png)
-2. Go to **Enterprise applications**, and then select **All applications**.
+
+## Step 4. Define who will be in scope for provisioning
- ![The Enterprise applications blade](common/enterprise-applications.png)
+The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application, or based on attributes of the user or group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-3. To add a new application, select the **New application** button at the top of the pane.
+* When assigning users and groups to Meta Networks Connector, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add more roles.
- ![The New application button](common/add-new-app.png)
+* Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-4. In the search box, enter **Meta Networks Connector**, select **Meta Networks Connector** in the results panel, and then click the **Add** button to add the application.
- ![Meta Networks Connector in the results list](common/search-new-app.png)
-## Configuring automatic user provisioning to Meta Networks Connector
+## Step 5. Configuring automatic user provisioning to Meta Networks Connector
This section guides you through the steps to configure the Azure AD provisioning service to create, update, and disable users and/or groups in Meta Networks Connector based on user and/or group assignments in Azure AD.
This section guides you through the steps to configure the Azure AD provisioning
![Enterprise applications blade](common/enterprise-applications.png)
-2. In the applications list, select **Meta Networks Connector**.
+1. In the applications list, select **Meta Networks Connector**.
![The Meta Networks Connector link in the Applications list](common/all-applications.png)
-3. Select the **Provisioning** tab.
+1. Select the **Provisioning** tab.
![Screenshot of the Manage options with the Provisioning option called out.](common/provisioning.png)
-4. Set the **Provisioning Mode** to **Automatic**.
+1. Set the **Provisioning Mode** to **Automatic**.
![Screenshot of the Provisioning Mode dropdown list with the Automatic option called out.](common/provisioning-automatic.png)
-5. Under the **Admin Credentials** section, input `https://api.metanetworks.com/v1/scim/<IdP ID>` in **Tenant URL**. Input the **SCIM Authentication Token** value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Meta Networks Connector. If the connection fails, ensure your Meta Networks Connector account has Admin permissions and try again.
+1. Under the **Admin Credentials** section, input `https://api.metanetworks.com/v1/scim/<IdP ID>` in **Tenant URL**. Input the **SCIM Authentication Token** value retrieved earlier in **Secret Token**. Click **Test Connection** to ensure Azure AD can connect to Meta Networks Connector. If the connection fails, ensure your Meta Networks Connector account has Admin permissions and try again.
![Tenant URL + Token](common/provisioning-testconnection-tenanturltoken.png)
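The **Test Connection** check can also be reproduced outside the portal. The sketch below builds an equivalent probe request, assuming the Meta Networks SCIM endpoint follows the standard SCIM 2.0 `/Users` resource (RFC 7644) and accepts the Step 2 secret as a bearer token; the tenant URL and secret values are placeholders, not documented Meta Networks behavior:

```python
import urllib.request

def build_scim_probe(tenant_url: str, secret_token: str) -> urllib.request.Request:
    """Build a minimal SCIM GET /Users request, as a connection probe.

    Hedged sketch: assumes a standard SCIM 2.0 /Users resource and
    bearer-token auth; not a documented Meta Networks contract.
    """
    return urllib.request.Request(
        f"{tenant_url.rstrip('/')}/Users?count=1",
        headers={
            "Authorization": f"Bearer {secret_token}",
            "Accept": "application/scim+json",
        },
    )

# To actually probe (requires network access and real values):
# with urllib.request.urlopen(build_scim_probe(
#         "https://api.metanetworks.com/v1/scim/<IdP ID>", "<SECRET>")) as r:
#     print(r.status)
```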
-6. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications and check the checkbox - **Send an email notification when a failure occurs**.
+1. In the **Notification Email** field, enter the email address of a person or group who should receive the provisioning error notifications, and select the **Send an email notification when a failure occurs** check box.
![Notification Email](common/provisioning-notification-email.png)
-7. Click **Save**.
+1. Click **Save**.
-8. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Meta Networks Connector**.
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Users to Meta Networks Connector**.
![Meta Networks Connector User Mappings](media/meta-networks-connector-provisioning-tutorial/usermappings.png)
-9. Review the user attributes that are synchronized from Azure AD to Meta Networks Connector in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Meta Networks Connector for update operations. Select the **Save** button to commit any changes.
+1. Review the user attributes that are synchronized from Azure AD to Meta Networks Connector in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the user accounts in Meta Networks Connector for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Meta Networks Connector API supports filtering users based on that attribute. Select the **Save** button to commit any changes.
- ![Meta Networks Connector User Attributes](media/meta-networks-connector-provisioning-tutorial/userattributes.png)
+ |Attribute|Type|Supported for filtering|Required by Meta Networks Connector|
+ |---|---|---|---|
+ |userName|String|&check;|&check;
+ |name.givenName|String||&check;
+ |name.familyName|String||&check;
+ |active|Boolean||
+ |phonenumbers[type eq "work"].value|String||
-10. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Meta Networks Connector**.
+ > [!NOTE]
+ > The phonenumbers[type eq "work"].value attribute should be in E.164 format. For example, +16175551212.
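Since the work phone attribute must be in E.164 format, a quick pre-flight check can catch badly formatted numbers before they reach provisioning. This is a generic sketch of the E.164 shape (a `+` followed by up to 15 digits), not a Meta Networks validation rule:

```python
import re

# E.164: a leading "+", a non-zero first digit, at most 15 digits total.
E164 = re.compile(r"^\+[1-9]\d{1,14}$")

def is_e164(number: str) -> bool:
    """Return True if the value matches the E.164 shape."""
    return bool(E164.fullmatch(number))

print(is_e164("+16175551212"))    # True
print(is_e164("(617) 555-1212"))  # False: punctuation and no "+" prefix
```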
+
+1. Under the **Mappings** section, select **Synchronize Azure Active Directory Groups to Meta Networks Connector**.
![Meta Networks Connector Group Mappings](media/meta-networks-connector-provisioning-tutorial/groupmappings.png)
-11. Review the group attributes that are synchronized from Azure AD to Meta Networks Connector in the **Attribute Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Meta Networks Connector for update operations. Select the **Save** button to commit any changes.
+1. Review the group attributes that are synchronized from Azure AD to Meta Networks Connector in the **Attribute-Mapping** section. The attributes selected as **Matching** properties are used to match the groups in Meta Networks Connector for update operations. If you choose to change the [matching target attribute](../app-provisioning/customize-application-attributes.md), you will need to ensure that the Meta Networks Connector API supports filtering groups based on that attribute. Select the **Save** button to commit any changes.
- ![Meta Networks Connector Group Attributes](media/meta-networks-connector-provisioning-tutorial/groupattributes.png)
+ |Attribute|Type|Supported for filtering|Required by Meta Networks Connector|
+ |---|---|---|---|
+ |displayName|String|&check;|&check;
+ |members|Reference||
-12. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+1. To configure scoping filters, refer to the following instructions provided in the [Scoping filter tutorial](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-13. To enable the Azure AD provisioning service for Meta Networks Connector, change the **Provisioning Status** to **On** in the **Settings** section.
+1. To enable the Azure AD provisioning service for Meta Networks Connector, change the **Provisioning Status** to **On** in the **Settings** section.
![Provisioning Status Toggled On](common/provisioning-toggle-on.png)
-14. Define the users and/or groups that you would like to provision to Meta Networks Connector by choosing the desired values in **Scope** in the **Settings** section.
+1. Define the users and/or groups that you would like to provision to Meta Networks Connector by choosing the desired values in **Scope** in the **Settings** section.
![Provisioning Scope](common/provisioning-scope.png)
-15. When you are ready to provision, click **Save**.
+1. When you are ready to provision, click **Save**.
   ![Saving Provisioning Configuration](common/provisioning-configuration-save.png)

This operation starts the initial synchronization of all users and/or groups defined in **Scope** in the **Settings** section. The initial sync takes longer to perform than subsequent syncs, which occur approximately every 40 minutes as long as the Azure AD provisioning service is running. You can use the **Synchronization Details** section to monitor progress and follow links to the provisioning activity report, which describes all actions performed by the Azure AD provisioning service on Meta Networks Connector.
-For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](../app-provisioning/check-status-user-account-provisioning.md).
+## Step 6. Monitor your deployment
+Once you've configured provisioning, use the following resources to monitor your deployment:
+
+* Use the [provisioning logs](../reports-monitoring/concept-provisioning-logs.md) to determine which users have been provisioned successfully or unsuccessfully
+* Check the [progress bar](../app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md) to see the status of the provisioning cycle and how close it is to completion
+* If the provisioning configuration seems to be in an unhealthy state, the application will go into quarantine. Learn more about quarantine states [here](../app-provisioning/application-provisioning-quarantine-status.md).
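Besides the portal views above, provisioning events can also be queried programmatically through Microsoft Graph (`GET /auditLogs/provisioning`). The sketch below only builds the request; it assumes you already hold an access token with the `AuditLog.Read.All` permission, and that `statusInfo/status` is a filterable property of the Graph `provisioningObjectSummary` resource:

```python
import urllib.parse
import urllib.request

def provisioning_logs_request(access_token: str, status: str = "failure") -> urllib.request.Request:
    """Build a Microsoft Graph query for recent provisioning events.

    Hedged sketch: token acquisition (MSAL, client credentials, ...) is
    out of scope; pass in a valid bearer token.
    """
    query = urllib.parse.urlencode({
        "$filter": f"statusInfo/status eq '{status}'",
        "$top": "20",
    })
    return urllib.request.Request(
        f"https://graph.microsoft.com/v1.0/auditLogs/provisioning?{query}",
        headers={"Authorization": f"Bearer {access_token}"},
    )
```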
+
-## Additional resources
+## More resources
* [Managing user account provisioning for Enterprise Apps](../app-provisioning/configure-automatic-user-provisioning-portal.md) * [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
active-directory Mx3 Diagnostics Connector Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/mx3-diagnostics-connector-provisioning-tutorial.md
na-+ Last updated 10/12/2021
active-directory Netpresenter Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/netpresenter-provisioning-tutorial.md
na-+ Last updated 10/04/2021
active-directory Openlearning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/openlearning-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with OpenLearning'
+description: Learn how to configure single sign-on between Azure Active Directory and OpenLearning.
++++++++ Last updated : 02/17/2022++++
+# Tutorial: Azure AD SSO integration with OpenLearning
+
+In this tutorial, you'll learn how to integrate OpenLearning with Azure Active Directory (Azure AD). When you integrate OpenLearning with Azure AD, you can:
+
+* Control in Azure AD who has access to OpenLearning.
+* Enable your users to be automatically signed-in to OpenLearning with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* OpenLearning single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* OpenLearning supports **SP** initiated SSO.
+
+## Add OpenLearning from the gallery
+
+To configure the integration of OpenLearning into Azure AD, you need to add OpenLearning from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **OpenLearning** in the search box.
+1. Select **OpenLearning** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for OpenLearning
+
+Configure and test Azure AD SSO with OpenLearning using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in OpenLearning.
+
+To configure and test Azure AD SSO with OpenLearning, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure OpenLearning SSO](#configure-openlearning-sso)** - to configure the single sign-on settings on the application side.
+ 1. **[Create OpenLearning test user](#create-openlearning-test-user)** - to have a counterpart of B.Simon in OpenLearning that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **OpenLearning** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, if you have **Service Provider metadata file**, perform the following steps:
+
+ a. Click **Upload metadata file**.
+
+ ![Upload metadata file](common/upload-metadata.png)
+
+ b. Click the **folder logo** to select the metadata file, and then click **Upload**.
+
+ ![choose metadata file](common/browse-upload-metadata.png)
+
+ c. After the metadata file is successfully uploaded, the **Identifier** value is automatically populated in the **Basic SAML Configuration** section.
+
+ d. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://www.openlearning.com/saml-redirect/<institution_id>/<idp_name>/`
+
+ > [!Note]
+ > If the **Identifier** value is not automatically populated, fill in the value manually according to your requirements. The Sign-on URL value above is not real; update it with the actual Sign-on URL. Contact the [OpenLearning Client support team](mailto:dev@openlearning.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
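Following the pattern above, the Sign-on URL can be composed once OpenLearning support provides the two path segments. A minimal sketch; the example values for `institution_id` and `idp_name` are made up:

```python
def openlearning_signon_url(institution_id: str, idp_name: str) -> str:
    """Compose the SP-initiated Sign-on URL from the documented pattern.

    institution_id and idp_name are placeholders supplied by
    OpenLearning support; the example values below are hypothetical.
    """
    return f"https://www.openlearning.com/saml-redirect/{institution_id}/{idp_name}/"

print(openlearning_signon_url("contoso-university", "azure-ad"))
# https://www.openlearning.com/saml-redirect/contoso-university/azure-ad/
```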
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Base64)** and select **Download** to download the certificate and save it on your computer.
+
+ ![The Certificate download link](common/certificatebase64.png)
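Before handing the downloaded certificate to the service provider, you can sanity-check that the file is well-formed PEM. This sketch uses the standard library's `ssl.PEM_cert_to_DER_cert`, which only verifies the PEM framing and Base64 body, not the certificate contents; the file name is a placeholder:

```python
import ssl

def is_valid_pem(path: str) -> bool:
    """Return True if the file at path has valid PEM certificate framing."""
    try:
        pem = open(path, encoding="ascii").read()
        ssl.PEM_cert_to_DER_cert(pem)  # raises ValueError on bad framing/Base64
        return True
    except (OSError, ValueError):
        return False

# Hypothetical file name for the downloaded Certificate (Base64):
# is_valid_pem("OpenLearning.cer")
```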
+
+1. On the **Set up OpenLearning** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Copy configuration URLs](common/copy-configuration-urls.png)
+
+1. The OpenLearning application requires token encryption to be enabled for SSO to work. To activate token encryption, go to **Azure Active Directory** > **Enterprise applications** and select **Token encryption**. For more information, refer to this [link](../manage-apps/howto-saml-token-encryption.md).
+
+ ![Screenshot shows the activation of Token Encryption.](./media/openlearning-tutorial/token.png "Token Encryption")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to OpenLearning.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **OpenLearning**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, the "Default Access" role is selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure OpenLearning SSO
+
+1. Log in to your OpenLearning company site as an administrator.
+
+1. Go to **SETTINGS** > **Integrations** and click **ADD** under **SAML Identity Provider (IDP) Configuration**.
+
+1. In the **SAML Identity Provider** page, perform the following steps:
+
+ ![Screenshot shows SAML settings](./media/openlearning-tutorial/certificate.png "SAML settings")
+
+ 1. In the **Name (required)** textbox, type a short configuration name.
+
+ 1. Copy the **Reply (ACS) Url** value and paste it into the **Reply URL** text box in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. In the **Entity ID/Issuer URL (required)** textbox, paste the **Azure AD Identifier** value which you have copied from the Azure portal.
+
+ 1. In the **Sign-In URL (required)** textbox, paste the **Login URL** value which you have copied from the Azure portal.
+
+ 1. Open the downloaded **Certificate (Base64)** from the Azure portal into Notepad and paste the content into the **Certificate (required)** textbox.
+
+ 1. Download the **Metadata XML** file, and then upload it in the **Basic SAML Configuration** section in the Azure portal.
+
+ 1. Click **Save**.
+
+### Create OpenLearning test user
+
+1. In a different web browser window, log in to your OpenLearning website as an administrator.
+
+1. Navigate to **People** and select **Invite People**.
+
+1. Enter valid **Email Addresses** in the textbox and click **INVITE ALL USERS**.
+
+ ![Screenshot shows inviting all users](./media/openlearning-tutorial/users.png "SAML USERS")
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+* Click **Test this application** in the Azure portal. This redirects to the OpenLearning Sign-on URL, where you can initiate the login flow.
+
+* Go to the OpenLearning Sign-on URL directly and initiate the login flow from there.
+
+* You can use Microsoft My Apps. When you click the OpenLearning tile in My Apps, you are redirected to the OpenLearning Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure OpenLearning, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Oracle Cloud Infrastructure Console Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/oracle-cloud-infrastructure-console-provisioning-tutorial.md
Add Oracle Cloud Infrastructure Console from the Azure AD application gallery to
The Azure AD provisioning service allows you to scope who will be provisioned based on assignment to the application and or based on attributes of the user / group. If you choose to scope who will be provisioned to your app based on assignment, you can use the following [steps](../manage-apps/assign-user-or-group-access-portal.md) to assign users and groups to the application. If you choose to scope who will be provisioned based solely on attributes of the user or group, you can use a scoping filter as described [here](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
-* When assigning users and groups to Oracle Cloud Infrastructure Console, you must select a role other than **Default Access**. Users with the Default Access role are excluded from provisioning and will be marked as not effectively entitled in the provisioning logs. If the only role available on the application is the default access role, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add additional roles.
- * Start small. Test with a small set of users and groups before rolling out to everyone. When scope for provisioning is set to assigned users and groups, you can control this by assigning one or two users or groups to the app. When scope is set to all users and groups, you can specify an [attribute based scoping filter](../app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md).
+* If you need additional roles, you can [update the application manifest](../develop/howto-add-app-roles-in-azure-ad-apps.md) to add new roles.
+ ## Step 5. Configure automatic user provisioning to Oracle Cloud Infrastructure Console
active-directory Prodpad Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/prodpad-provisioning-tutorial.md
ms.devlang: na-+ Last updated 02/09/2022
active-directory Reviewsnap Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/reviewsnap-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Reviewsnap | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Reviewsnap'
description: Learn how to configure single sign-on between Azure Active Directory and Reviewsnap.
Previously updated : 03/26/2019 Last updated : 02/28/2022
-# Tutorial: Azure Active Directory integration with Reviewsnap
+# Tutorial: Azure AD SSO integration with Reviewsnap
-In this tutorial, you learn how to integrate Reviewsnap with Azure Active Directory (Azure AD).
-Integrating Reviewsnap with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Reviewsnap with Azure Active Directory (Azure AD). When you integrate Reviewsnap with Azure AD, you can:
-* You can control in Azure AD who has access to Reviewsnap.
-* You can enable your users to be automatically signed-in to Reviewsnap (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Reviewsnap.
+* Enable your users to be automatically signed-in to Reviewsnap with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites To configure Azure AD integration with Reviewsnap, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Reviewsnap single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Reviewsnap single sign-on enabled subscription.
## Scenario description In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Reviewsnap supports **SP and IDP** initiated SSO
-
-## Adding Reviewsnap from the gallery
-
-To configure the integration of Reviewsnap into Azure AD, you need to add Reviewsnap from the gallery to your list of managed SaaS apps.
-
-**To add Reviewsnap from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* Reviewsnap supports **SP and IDP** initiated SSO.
-4. In the search box, type **Reviewsnap**, select **Reviewsnap** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![Reviewsnap in the results list](common/search-new-app.png)
+## Add Reviewsnap from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Reviewsnap based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Reviewsnap needs to be established.
-
-To configure and test Azure AD single sign-on with Reviewsnap, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Reviewsnap Single Sign-On](#configure-reviewsnap-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Reviewsnap test user](#create-reviewsnap-test-user)** - to have a counterpart of Britta Simon in Reviewsnap that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of Reviewsnap into Azure AD, you need to add Reviewsnap from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Reviewsnap** in the search box.
+1. Select **Reviewsnap** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Reviewsnap
-To configure Azure AD single sign-on with Reviewsnap, perform the following steps:
+Configure and test Azure AD SSO with Reviewsnap using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Reviewsnap.
-1. In the [Azure portal](https://portal.azure.com/), on the **Reviewsnap** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Reviewsnap, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Reviewsnap SSO](#configure-reviewsnap-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Reviewsnap test user](#create-reviewsnap-test-user)** - to have a counterpart of B.Simon in Reviewsnap that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Reviewsnap** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
-
- a. In the **Identifier** text box, type a URL:
+ a. In the **Identifier** text box, type the URL:
`https://app.reviewsnap.com` b. In the **Reply URL** text box, type a URL using the following pattern:
To configure Azure AD single sign-on with Reviewsnap, perform the following step
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
-
- In the **Sign-on URL** text box, type a URL:
+ In the **Sign-on URL** text box, type the URL:
`https://app.reviewsnap.com/login` > [!NOTE]
To configure Azure AD single sign-on with Reviewsnap, perform the following step
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Reviewsnap Single Sign-On
-
-To configure single sign-on on **Reviewsnap** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [Reviewsnap support team](mailto:support@reviewsnap.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Reviewsnap.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Reviewsnap.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Reviewsnap**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Reviewsnap**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Reviewsnap SSO
-2. In the applications list, select **Reviewsnap**.
-
- ![The Reviewsnap link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
+To configure single sign-on on the **Reviewsnap** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [Reviewsnap support team](mailto:support@reviewsnap.com). They configure these settings so that the SAML SSO connection is set properly on both sides.
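The step above hands the signing certificate to the support team manually. As a hedged aside, the following minimal Python sketch (standard library only, run against a made-up sample document, not part of the tutorial) shows how a Base64 signing certificate can be pulled out of Azure AD federation metadata; element names follow the SAML 2.0 metadata schema, and the certificate value is a placeholder:

```python
import xml.etree.ElementTree as ET

# Namespace prefixes for the SAML 2.0 metadata and XML Signature schemas.
NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

# Minimal stand-in for a downloaded Federation Metadata XML file;
# entityID and certificate are placeholder values.
SAMPLE_METADATA = """\
<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
                  entityID="https://sts.windows.net/contoso/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>MIICsampleBase64cert==</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
  </IDPSSODescriptor>
</EntityDescriptor>
"""

def extract_signing_cert(metadata_xml: str) -> str:
    """Return the Base64 signing certificate from federation metadata."""
    root = ET.fromstring(metadata_xml)
    # The signing key lives under a KeyDescriptor marked use="signing".
    key_descriptor = root.find(".//md:KeyDescriptor[@use='signing']", NS)
    cert = key_descriptor.find(".//ds:X509Certificate", NS)
    return cert.text.strip()

print(extract_signing_cert(SAMPLE_METADATA))
```

With a real download, you would read the file contents instead of `SAMPLE_METADATA` before forwarding the extracted certificate.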
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create Reviewsnap test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called Britta Simon in Reviewsnap. Work with [Reviewsnap support team](mailto:support@reviewsnap.com) to add the users in the Reviewsnap platform. Users must be created and activated before you use single sign-on.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create Reviewsnap test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in Reviewsnap. Work with [Reviewsnap support team](mailto:support@reviewsnap.com) to add the users in the Reviewsnap platform. Users must be created and activated before you use single sign-on.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in the Azure portal. This will redirect to the Reviewsnap Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to the Reviewsnap Sign-on URL directly and initiate the login flow from there.
-When you click the Reviewsnap tile in the Access Panel, you should be automatically signed in to the Reviewsnap for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional Resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Reviewsnap instance for which you set up SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Reviewsnap tile in My Apps, if it's configured in SP mode you'll be redirected to the application sign-on page to initiate the login flow, and if it's configured in IDP mode, you should be automatically signed in to the Reviewsnap instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Reviewsnap, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Rolepoint Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/rolepoint-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with RolePoint | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with RolePoint'
description: In this tutorial, you'll learn how to configure single sign-on between Azure Active Directory and RolePoint.
Previously updated : 03/15/2019 Last updated : 02/28/2022
-# Tutorial: Azure Active Directory integration with RolePoint
+# Tutorial: Azure AD SSO integration with RolePoint
-In this tutorial, you'll learn how to integrate RolePoint with Azure Active Directory (Azure AD).
-This integration provides these benefits:
+In this tutorial, you'll learn how to integrate RolePoint with Azure Active Directory (Azure AD). When you integrate RolePoint with Azure AD, you can:
-* You can use Azure AD to control who has access to RolePoint.
-* You can enable your users to be automatically signed in to RolePoint (single sign-on) with their Azure AD accounts.
-* You can manage your accounts in one central location: the Azure portal.
-
-To learn more about SaaS app integration with Azure AD, see [Single sign-on to applications in Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to RolePoint.
+* Enable your users to be automatically signed-in to RolePoint with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
In this tutorial, you'll configure and test Azure AD single sign-on in a test en
## Add RolePoint from the gallery
-To set up the integration of RolePoint into Azure AD, you need to add RolePoint from the gallery to your list of managed SaaS apps.
-
-1. In the [Azure portal](https://portal.azure.com), in the left pane, select **Azure Active Directory**:
-
- ![Select Azure Active Directory](common/select-azuread.png)
-
-2. Go to **Enterprise applications** > **All applications**:
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add an application, select **New application** at the top of the window:
-
- ![Select New application](common/add-new-app.png)
-
-4. In the search box, enter **RolePoint**. Select **RolePoint** in the search results and then select **Add**.
-
- ![Search results](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
+To configure the integration of RolePoint into Azure AD, you need to add RolePoint from the gallery to your list of managed SaaS apps.
-In this section, you'll configure and test Azure AD single sign-on with RolePoint by using a test user named Britta Simon.
-To enable single sign-on, you need to establish a relationship between an Azure AD user and the corresponding user in RolePoint.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **RolePoint** in the search box.
+1. Select **RolePoint** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with RolePoint, you need to complete these steps:
+## Configure and test Azure AD SSO for RolePoint
-1. **[Configure Azure AD single sign-on](#configure-azure-ad-single-sign-on)** to enable the feature for your users.
-2. **[Configure RolePoint single sign-on](#configure-rolepoint-single-sign-on)** on the application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** to test Azure AD single sign-on.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** to enable Azure AD single sign-on for the user.
-5. **[Create a RolePoint test user](#create-a-rolepoint-test-user)** that's linked to the Azure AD representation of the user.
-6. **[Test single sign-on](#test-single-sign-on)** to verify that the configuration works.
+Configure and test Azure AD SSO with RolePoint using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in RolePoint.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with RolePoint, perform the following steps:
-In this section, you'll enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure RolePoint SSO](#configure-rolepoint-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create RolePoint test user](#create-rolepoint-test-user)** - to have a counterpart of B.Simon in RolePoint that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with RolePoint, take these steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the RolePoint application integration page, select **Single sign-on**:
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Select single sign-on](common/select-sso.png)
+1. In the Azure portal, on the **RolePoint** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. In the **Select a single sign-on method** dialog box, select **SAML/WS-Fed** mode to enable single sign-on:
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![Select a single sign-on method](common/select-saml-option.png)
+4. In the **Basic SAML Configuration** dialog box, perform the following steps:
-3. On the **Set up Single Sign-On with SAML** page, select the **Edit** icon to open the **Basic SAML Configuration** dialog box:
+ 1. In the **Identifier (Entity ID)** box, type a URL using the following pattern:
- ![Edit icon](common/edit-urls.png)
-
-4. In the **Basic SAML Configuration** dialog box, take the following steps.
-
- ![Basic SAML Configuration dialog box](common/sp-identifier.png)
-
- 1. In the **Sign on URL** box, enter a URL in this pattern:
-
- `https://<subdomain>.rolepoint.com/login`
+ `https://app.rolepoint.com/<instancename>`
- 1. In the **Identifier (Entity ID)** box, enter a URL in this pattern:
+ 1. In the **Sign on URL** box, type a URL using the following pattern:
- `https://app.rolepoint.com/<instancename>`
+ `https://<subdomain>.rolepoint.com/login`
> [!NOTE]
- > These values are placeholders. You need to use the actual sign-on URL and identifier. We suggest that you use a unique string value in the identifier. Contact the [RolePoint support team](mailto:info@rolepoint.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** dialog box in the Azure portal.
+ > These values are placeholders. You need to use the actual Identifier and Sign on URL. We suggest that you use a unique string value in the identifier. Contact the [RolePoint support team](mailto:info@rolepoint.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** dialog box in the Azure portal.
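Before pasting the values supplied by the support team into the **Basic SAML Configuration** dialog, they can be sanity-checked against the documented patterns. The following Python sketch is an illustrative assumption, not part of the tutorial; the helper name and the character classes in the regexes are guesses at what a valid instance name or subdomain looks like:

```python
import re

# Patterns mirroring the documented placeholders:
#   Identifier:  https://app.rolepoint.com/<instancename>
#   Sign on URL: https://<subdomain>.rolepoint.com/login
IDENTIFIER_PATTERN = re.compile(r"^https://app\.rolepoint\.com/[A-Za-z0-9-]+$")
SIGN_ON_PATTERN = re.compile(r"^https://[A-Za-z0-9-]+\.rolepoint\.com/login$")

def values_match_patterns(identifier: str, sign_on_url: str) -> bool:
    """True when both values fit the documented URL patterns."""
    return bool(IDENTIFIER_PATTERN.match(identifier)
                and SIGN_ON_PATTERN.match(sign_on_url))

print(values_match_patterns("https://app.rolepoint.com/contoso",
                            "https://contoso.rolepoint.com/login"))
```

A check like this only catches obvious copy-paste mistakes; the actual values still come from the RolePoint support team.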
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, select the **Download** link next to **Federation Metadata XML**, per your requirements, and save the file on your computer.
To configure Azure AD single sign-on with RolePoint, take these steps:
![Copy the configuration URLs](common/copy-configuration-urls.png)
- 1. **Login URL**.
-
- 1. **Azure AD Identifier**.
-
- 1. **Logout URL**.
--
-### Configure RolePoint single sign-on
-
-To set up single sign-on on the RolePoint side, you need to work with the [RolePoint support team](mailto:info@rolepoint.com). Send this team the Federation Metadata XML file and the URLs that you got from the Azure portal. They'll configure RolePoint to ensure the SAML SSO connection is set properly on both sides.
- ### Create an Azure AD test user
-In this section, you'll create a test user named Britta Simon in the Azure portal.
-
-1. In the Azure portal, select **Azure Active Directory** in the left pane, select **Users**, and then select **All users**:
-
- ![Select All users](common/users.png)
-
-2. Select **New user** at the top of the window:
-
- ![Select New user](common/new-user.png)
-
-3. In the **User** dialog box, take the following steps.
-
- ![User dialog box](common/user-properties.png)
-
- 1. In the **Name** box, enter **BrittaSimon**.
-
- 1. In the **User name** box, enter **BrittaSimon@\<yourcompanydomain>.\<extension>**. (For example, BrittaSimon@contoso.com.)
+In this section, you'll create a test user in the Azure portal called B.Simon.
- 1. Select **Show Password**, and then write down the value that's in the **Password** box.
-
- 1. Select **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you'll enable Britta Simon to use Azure single sign-on by granting her access to RolePoint.
-
-1. In the Azure portal, select **Enterprise applications**, select **All applications**, and then select **RolePoint**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the list of applications, select **RolePoint**.
-
- ![List of applications](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to RolePoint.
-3. In the left pane, select **Users and groups**:
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **RolePoint**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Select Users and groups](common/users-groups-blade.png)
+## Configure RolePoint SSO
-4. Select **Add user**, and then select **Users and groups** in the **Add Assignment** dialog box.
-
- ![Select Add user](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog box, select **Britta Simon** in the users list, and then click the **Select** button at the bottom of the window.
-
-6. If you expect a role value in the SAML assertion, in the **Select Role** dialog box, select the appropriate role for the user from the list. Click the **Select** button at the bottom of the window.
-
-7. In the **Add Assignment** dialog box, select **Assign**.
+To set up single sign-on on the RolePoint side, you need to work with the [RolePoint support team](mailto:info@rolepoint.com). Send this team the Federation Metadata XML file and the URLs that you got from the Azure portal. They'll configure RolePoint to ensure the SAML SSO connection is set properly on both sides.
-### Create a RolePoint test user
+### Create RolePoint test user
Next, you need to create a user named Britta Simon in RolePoint. Work with the [RolePoint support team](mailto:info@rolepoint.com) to add users to RolePoint. Users need to be created and activated before you can use single sign-on.
-### Test single sign-on
+## Test SSO
-Now you need to test your Azure AD single sign-on configuration by using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you select the RolePoint tile in the Access Panel, you should be automatically signed in to the RolePoint instance for which you set up SSO. For more information about the Access Panel, see [Access and use apps on the My Apps portal](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the RolePoint Sign-on URL, where you can initiate the login flow.
-## Additional resources
+* Go to the RolePoint Sign-on URL directly and initiate the login flow from there.
-- [Tutorials for integrating SaaS applications with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the RolePoint tile in My Apps, you'll be redirected to the RolePoint Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure RolePoint, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Shucchonavi Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/shucchonavi-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Shuccho Navi | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Shuccho Navi'
description: Learn how to configure single sign-on between Azure Active Directory and Shuccho Navi.
Previously updated : 03/07/2019 Last updated : 02/28/2022
-# Tutorial: Azure Active Directory integration with Shuccho Navi
+# Tutorial: Azure AD SSO integration with Shuccho Navi
-In this tutorial, you learn how to integrate Shuccho Navi with Azure Active Directory (Azure AD).
-Integrating Shuccho Navi with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Shuccho Navi with Azure Active Directory (Azure AD). When you integrate Shuccho Navi with Azure AD, you can:
-* You can control in Azure AD who has access to Shuccho Navi.
-* You can enable your users to be automatically signed-in to Shuccho Navi (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Shuccho Navi.
+* Enable your users to be automatically signed-in to Shuccho Navi with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites
-To configure Azure AD integration with Shuccho Navi, you need the following items:
+To get started, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* Shuccho Navi single sign-on enabled subscription
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Shuccho Navi single sign-on (SSO) enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Shuccho Navi supports **SP** initiated SSO
-
-## Adding Shuccho Navi from the gallery
-
-To configure the integration of Shuccho Navi into Azure AD, you need to add Shuccho Navi from the gallery to your list of managed SaaS apps.
-
-**To add Shuccho Navi from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* Shuccho Navi supports **SP** initiated SSO.
- ![The New application button](common/add-new-app.png)
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
-4. In the search box, type **Shuccho Navi**, select **Shuccho Navi** from result panel then click **Add** button to add the application.
+## Add Shuccho Navi from the gallery
- ![Shuccho Navi in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Shuccho Navi based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Shuccho Navi needs to be established.
-
-To configure and test Azure AD single sign-on with Shuccho Navi, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Shuccho Navi Single Sign-On](#configure-shuccho-navi-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Shuccho Navi test user](#create-shuccho-navi-test-user)** - to have a counterpart of Britta Simon in Shuccho Navi that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
-
-### Configure Azure AD single sign-on
+To configure the integration of Shuccho Navi into Azure AD, you need to add Shuccho Navi from the gallery to your list of managed SaaS apps.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Shuccho Navi** in the search box.
+1. Select **Shuccho Navi** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure Azure AD single sign-on with Shuccho Navi, perform the following steps:
+## Configure and test Azure AD SSO for Shuccho Navi
-1. In the [Azure portal](https://portal.azure.com/), on the **Shuccho Navi** application integration page, select **Single sign-on**.
+Configure and test Azure AD SSO with Shuccho Navi using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Shuccho Navi.
- ![Configure single sign-on link](common/select-sso.png)
+To configure and test Azure AD SSO with Shuccho Navi, perform the following steps:
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Shuccho Navi SSO](#configure-shuccho-navi-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Shuccho Navi test user](#create-shuccho-navi-test-user)** - to have a counterpart of B.Simon in Shuccho Navi that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
- ![Single sign-on select mode](common/select-saml-option.png)
+## Configure Azure AD SSO
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+1. In the Azure portal, on the **Shuccho Navi** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-4. On the **Basic SAML Configuration** section, perform the following steps:
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
- ![Shuccho Navi Domain and URLs single sign-on information](common/sp-signonurl.png)
+4. On the **Basic SAML Configuration** section, perform the following step:
In the **Sign-on URL** text box, type a URL using the following pattern: `https://naviauth.nta.co.jp/saml/login?ENTP_CD=<Your company code>`
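As a sketch of how the Sign-on URL pattern above resolves, the SP-initiated URL can be assembled from your company code; the `EXAMPLE01` code below is a placeholder, not a real value:

```python
from urllib.parse import urlencode

def shuccho_navi_signon_url(company_code: str) -> str:
    """Build the SP-initiated Sign-on URL from a company code."""
    base = "https://naviauth.nta.co.jp/saml/login"
    # urlencode escapes the company code so it is safe as a query value.
    return f"{base}?{urlencode({'ENTP_CD': company_code})}"

# "EXAMPLE01" is a placeholder; use the company code issued for your tenant.
signon_url = shuccho_navi_signon_url("EXAMPLE01")
```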
To configure Azure AD single sign-on with Shuccho Navi, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Shuccho Navi Single Sign-On
-
-To configure single sign-on on **Shuccho Navi** side, you need to send the downloaded **Metadata XML** and appropriate copied URLs from Azure portal to [Shuccho Navi support team](mailto:sys_ntabtm@nta.co.jp). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type **brittasimon\@yourcompanydomain.extension**
- For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Shuccho Navi.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Shuccho Navi.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Shuccho Navi**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Shuccho Navi**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Shuccho Navi SSO
-2. In the applications list, select **Shuccho Navi**.
-
- ![The Shuccho Navi link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog, select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog, click the **Assign** button.
+To configure single sign-on on the **Shuccho Navi** side, you need to send the downloaded **Metadata XML** and the appropriate copied URLs from the Azure portal to the [Shuccho Navi support team](mailto:sys_ntabtm@nta.co.jp). The support team configures this setting so that the SAML SSO connection is set properly on both sides.
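The downloaded federation metadata is standard SAML 2.0 metadata XML. As a minimal sanity check before sending the file on, you can confirm it parses and read its `entityID`; this sketch uses only the Python standard library, and the file name and `sts.windows.net` value in the usage comment are illustrative:

```python
import xml.etree.ElementTree as ET

# SAML 2.0 metadata namespace, as used by the md:EntityDescriptor root element.
MD_NS = "{urn:oasis:names:tc:SAML:2.0:metadata}"

def read_entity_id(path: str) -> str:
    """Return the entityID attribute from a SAML federation metadata file."""
    root = ET.parse(path).getroot()
    # Federation metadata is rooted at md:EntityDescriptor; its entityID
    # attribute identifies the issuing Azure AD tenant.
    if root.tag != MD_NS + "EntityDescriptor":
        raise ValueError(f"unexpected root element: {root.tag}")
    return root.attrib["entityID"]

# Usage (file name is hypothetical):
#   read_entity_id("FederationMetadata.xml")
```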
### Create Shuccho Navi test user

In this section, you create a user called Britta Simon in Shuccho Navi. Work with [Shuccho Navi support team](mailto:sys_ntabtm@nta.co.jp) to add the users in the Shuccho Navi platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Shuccho Navi tile in the Access Panel, you should be automatically signed in to the Shuccho Navi for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This redirects to the Shuccho Navi Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to Shuccho Navi Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Shuccho Navi tile in My Apps, you're redirected to the Shuccho Navi Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Shuccho Navi, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Soonr Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/soonr-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Soonr Workplace | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Soonr Workplace'
description: Learn how to configure single sign-on between Azure Active Directory and Soonr Workplace.
Previously updated : 04/08/2019 Last updated : 02/28/2022
-# Tutorial: Azure Active Directory integration with Soonr Workplace
+# Tutorial: Azure AD SSO integration with Soonr Workplace
-In this tutorial, you learn how to integrate Soonr Workplace with Azure Active Directory (Azure AD).
-Integrating Soonr Workplace with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Soonr Workplace with Azure Active Directory (Azure AD). When you integrate Soonr Workplace with Azure AD, you can:
-* You can control in Azure AD who has access to Soonr Workplace.
-* You can enable your users to be automatically signed-in to Soonr Workplace (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Soonr Workplace.
+* Enable your users to be automatically signed-in to Soonr Workplace with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with Soonr Workplace, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Soonr Workplace single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Soonr Workplace single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Soonr Workplace supports **SP and IDP** initiated SSO
+* Soonr Workplace supports **SP and IDP** initiated SSO.
-## Adding Soonr Workplace from the gallery
+## Add Soonr Workplace from the gallery
To configure the integration of Soonr Workplace into Azure AD, you need to add Soonr Workplace from the gallery to your list of managed SaaS apps.
-**To add Soonr Workplace from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click the **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add a new application, click the **New application** button at the top of the dialog.
-
- ![The New application button](common/add-new-app.png)
-
-4. In the search box, type **Soonr Workplace**, select **Soonr Workplace** from the result panel then click the **Add** button to add the application.
-
- ![Soonr Workplace in the results list](common/search-new-app.png)
-
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Soonr Workplace based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Soonr Workplace needs to be established.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Soonr Workplace** in the search box.
+1. Select **Soonr Workplace** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-To configure and test Azure AD single sign-on with Soonr Workplace, you need to complete the following building blocks:
+## Configure and test Azure AD SSO for Soonr Workplace
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Soonr Workplace Single Sign-On](#configure-soonr-workplace-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Soonr Workplace test user](#create-soonr-workplace-test-user)** - to have a counterpart of Britta Simon in Soonr Workplace that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+Configure and test Azure AD SSO with Soonr Workplace using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Soonr Workplace.
-### Configure Azure AD single sign-on
+To configure and test Azure AD SSO with Soonr Workplace, perform the following steps:
-In this section, you enable Azure AD single sign-on in the Azure portal.
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Soonr Workplace SSO](#configure-soonr-workplace-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Soonr Workplace test user](#create-soonr-workplace-test-user)** - to have a counterpart of B.Simon in Soonr Workplace that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-To configure Azure AD single sign-on with Soonr Workplace, perform the following steps:
+## Configure Azure AD SSO
-1. In the [Azure portal](https://portal.azure.com/), on the **Soonr Workplace** application integration page, select **Single sign-on**.
+Follow these steps to enable Azure AD SSO in the Azure portal.
- ![Configure single sign-on link](common/select-sso.png)
+1. In the Azure portal, on the **Soonr Workplace** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
-
- ![Single sign-on select mode](common/select-saml-option.png)
-
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, perform the following steps:
- ![Screenshot shows the Basic SAML Configuration, where you can enter Identifier, Reply U R L, and select Save.](common/idp-intiated.png)
- a. In the **Identifier** text box, type a URL using the following pattern: `https://<servername>.soonr.com/singlesignon/saml/metadata`
To configure Azure AD single sign-on with Soonr Workplace, perform the following steps:
5. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
- ![Screenshot shows Set additional U R Ls where you can enter a Sign on U R L.](common/metadata-upload-additional-signon.png)
- In the **Sign-on URL** text box, type a URL using the following pattern: `https://<servername>.soonr.com/singlesignon/saml/SSO`
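The two URL patterns above (the IDP-mode Identifier and the SP-mode Sign-on URL) differ only in their final path segment, so they can be sketched as one small helper; the `contoso` server name below is a placeholder, not a real tenant:

```python
def soonr_saml_urls(servername: str) -> dict:
    """Build the Basic SAML Configuration URLs for a Soonr Workplace subdomain."""
    base = f"https://{servername}.soonr.com/singlesignon/saml"
    return {
        "identifier": f"{base}/metadata",  # Identifier (Entity ID), IDP mode
        "sign_on_url": f"{base}/SSO",      # Sign-on URL, SP mode
    }

# "contoso" is a placeholder server name; substitute your own subdomain.
urls = soonr_saml_urls("contoso")
```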
To configure Azure AD single sign-on with Soonr Workplace, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Soonr Workplace Single Sign-On
-
-To configure single sign-on on **Soonr Workplace** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Soonr Workplace support team](https://awp.autotask.net/help/). They set this setting to have the SAML SSO connection set properly on both sides.
-
-> [!Note]
-> If you require assistance with configuring Autotask Workplace, please see [this page](https://awp.autotask.net/help/Content/0_HOME/Support_for_End_Clients.htm) to get assistance with your Workplace account.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type `brittasimon@yourcompanydomain.extension`. For example, BrittaSimon@contoso.com
-
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
+In this section, you'll create a test user in the Azure portal called B.Simon.
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Soonr Workplace.
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Soonr Workplace.
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Soonr Workplace**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Soonr Workplace**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![Enterprise applications blade](common/enterprise-applications.png)
+## Configure Soonr Workplace SSO
-2. In the applications list, select **Soonr Workplace**.
-
- ![The Soonr Workplace link in the Applications list](common/all-applications.png)
-
-3. In the menu on the left, select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
+To configure single sign-on on the **Soonr Workplace** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Soonr Workplace support team](https://awp.autotask.net/help/). The support team configures this setting so that the SAML SSO connection is set properly on both sides.
- ![The Add Assignment pane](common/add-assign-user.png)
+> [!Note]
+> If you require assistance with configuring Autotask Workplace, please see [this page](https://awp.autotask.net/help/Content/0_HOME/Support_for_End_Clients.htm) to get assistance with your Workplace account.
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
+### Create Soonr Workplace test user
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
+In this section, you create a user called Britta Simon in Soonr Workplace. Work with [Soonr Workplace support team](https://awp.autotask.net/help/) to add the users in the Soonr Workplace platform. Users must be created and activated before you use single sign-on.
-7. In the **Add Assignment** dialog click the **Assign** button.
+## Test SSO
-### Create Soonr Workplace test user
+In this section, you test your Azure AD single sign-on configuration with the following options.
-In this section, you create a user called Britta Simon in Soonr Workplace. Work with [Soonr Workplace support team](https://awp.autotask.net/help/) to add the users in the Soonr Workplace platform. Users must be created and activated before you use single sign-on.
+#### SP initiated:
-### Test single sign-on
+* Click on **Test this application** in the Azure portal. This redirects to the Soonr Workplace Sign-on URL, where you can initiate the login flow.
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+* Go to Soonr Workplace Sign-on URL directly and initiate the login flow from there.
-When you click the Soonr Workplace tile in the Access Panel, you should be automatically signed in to the Soonr Workplace for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+#### IDP initiated:
-## Additional resources
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Soonr Workplace for which you set up the SSO.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+You can also use Microsoft My Apps to test the application in any mode. When you click the Soonr Workplace tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Soonr Workplace instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Soonr Workplace, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Terratrue Provisioning Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/terratrue-provisioning-tutorial.md
ms.devlang: na-+ Last updated 12/16/2021
active-directory Tonicdm Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/tonicdm-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with TonicDM | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with TonicDM'
description: Learn how to configure single sign-on between Azure Active Directory and TonicDM.
Previously updated : 03/28/2019 Last updated : 02/25/2022
-# Tutorial: Azure Active Directory integration with TonicDM
+# Tutorial: Azure AD SSO integration with TonicDM
-In this tutorial, you learn how to integrate TonicDM with Azure Active Directory (Azure AD).
-Integrating TonicDM with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate TonicDM with Azure Active Directory (Azure AD). When you integrate TonicDM with Azure AD, you can:
-* You can control in Azure AD who has access to TonicDM.
-* You can enable your users to be automatically signed-in to TonicDM (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to TonicDM.
+* Enable your users to be automatically signed-in to TonicDM with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with TonicDM, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get one-month trial [here](https://azure.microsoft.com/pricing/free-trial/)
-* TonicDM single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* TonicDM single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* TonicDM supports **SP** initiated SSO
-
-* TonicDM supports **Just In Time** user provisioning
-
-## Adding TonicDM from the gallery
-
-To configure the integration of TonicDM into Azure AD, you need to add TonicDM from the gallery to your list of managed SaaS apps.
-
-**To add TonicDM from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
+* TonicDM supports **SP** initiated SSO.
- ![The New application button](common/add-new-app.png)
+* TonicDM supports **Just In Time** user provisioning.
-4. In the search box, type **TonicDM**, select **TonicDM** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![TonicDM in the results list](common/search-new-app.png)
+## Add TonicDM from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with TonicDM based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in TonicDM needs to be established.
-
-To configure and test Azure AD single sign-on with TonicDM, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure TonicDM Single Sign-On](#configure-tonicdm-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create TonicDM test user](#create-tonicdm-test-user)** - to have a counterpart of Britta Simon in TonicDM that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of TonicDM into Azure AD, you need to add TonicDM from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **TonicDM** in the search box.
+1. Select **TonicDM** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for TonicDM
-To configure Azure AD single sign-on with TonicDM, perform the following steps:
+Configure and test Azure AD SSO with TonicDM using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in TonicDM.
-1. In the [Azure portal](https://portal.azure.com/), on the **TonicDM** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with TonicDM, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure TonicDM SSO](#configure-tonicdm-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create TonicDM test user](#create-tonicdm-test-user)** - to have a counterpart of B.Simon in TonicDM that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **TonicDM** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![TonicDM Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type the URL:
+ `https://tonicdm.com/saml/metadata`
- a. In the **Sign on URL** text box, type a URL:
+ b. In the **Sign on URL** text box, type the URL:
`https://tonicdm.com/`
- b. In the **Identifier (Entity ID)** text box, type a URL:
- `https://tonicdm.com/saml/metadata`
5. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, click **Download** to download the **Certificate (Base64)** from the given options, based on your requirement, and save it on your computer.

   ![The Certificate download link](common/certificatebase64.png)
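Some tools hand you the signing certificate as a bare Base64 blob rather than the PEM-framed file the portal downloads. As a minimal sketch (the certificate bytes here are a stand-in, not a real certificate), re-wrapping a bare blob into PEM form looks like this:

```python
import base64
import textwrap

# Stand-in blob; a real download contains DER-encoded X.509 bytes.
cert_b64 = base64.b64encode(b"example-certificate-bytes").decode()

# PEM framing: header, Base64 body wrapped at 64 columns, footer.
pem = "\n".join(
    ["-----BEGIN CERTIFICATE-----"]
    + textwrap.wrap(cert_b64, 64)
    + ["-----END CERTIFICATE-----"]
)
```

The resulting `pem` string can be saved with a `.cer` extension and sent on to the service provider.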
To configure Azure AD single sign-on with TonicDM, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure TonicDM Single Sign-On
-
-To configure single sign-on on **TonicDM** side, you need to send the downloaded **Certificate (Base64)** and appropriate copied URLs from Azure portal to [TonicDM support team](mailto:support@tonicdm.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to TonicDM.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **TonicDM**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **TonicDM**.
-
- ![The TonicDM link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to TonicDM.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **TonicDM**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure TonicDM SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **TonicDM** side, you need to send the downloaded **Certificate (Base64)** and the appropriate copied URLs from the Azure portal to the [TonicDM support team](mailto:support@tonicdm.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
### Create TonicDM test user

In this section, you create a user called Britta Simon in TonicDM. Work with the [TonicDM support team](mailto:support@tonicdm.com) to add the users in the TonicDM platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the TonicDM tile in the Access Panel, you should be automatically signed in to the TonicDM for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click **Test this application** in the Azure portal. This redirects to the TonicDM Sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the TonicDM Sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the TonicDM tile in My Apps, you're redirected to the TonicDM Sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure TonicDM, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Us Bank Prepaid Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/us-bank-prepaid-tutorial.md
+
+ Title: 'Tutorial: Azure AD SSO integration with U.S. Bank Prepaid'
+description: Learn how to configure single sign-on between Azure Active Directory and U.S. Bank Prepaid.
+Last updated : 03/03/2022
+# Tutorial: Azure AD SSO integration with U.S. Bank Prepaid
+
+In this tutorial, you'll learn how to integrate U.S. Bank Prepaid with Azure Active Directory (Azure AD). When you integrate U.S. Bank Prepaid with Azure AD, you can:
+
+* Control in Azure AD who has access to U.S. Bank Prepaid.
+* Enable your users to be automatically signed-in to U.S. Bank Prepaid with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* U.S. Bank Prepaid single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* U.S. Bank Prepaid supports **SP and IDP** initiated SSO.
+
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
+
+## Add U.S. Bank Prepaid from the gallery
+
+To configure the integration of U.S. Bank Prepaid into Azure AD, you need to add U.S. Bank Prepaid from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **U.S. Bank Prepaid** in the search box.
+1. Select **U.S. Bank Prepaid** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+## Configure and test Azure AD SSO for U.S. Bank Prepaid
+
+Configure and test Azure AD SSO with U.S. Bank Prepaid using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in U.S. Bank Prepaid.
+
+To configure and test Azure AD SSO with U.S. Bank Prepaid, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure U.S. Bank Prepaid SSO](#configure-us-bank-prepaid-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create U.S. Bank Prepaid test user](#create-us-bank-prepaid-test-user)** - to have a counterpart of B.Simon in U.S. Bank Prepaid that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **U.S. Bank Prepaid** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
+
+1. On the **Basic SAML Configuration** section, no steps are required to configure the application in **IDP** initiated mode, because the app is already pre-integrated with Azure.
+
+1. On the **Basic SAML Configuration** section, if you wish to configure the application in **SP** initiated mode then perform the following steps:
+
+ a. In the **Identifier** text box, type the value:
+ `USBank:SAML2.0:Prepaid_SP`
+
+ b. In the **Reply URL** text box, type the URL:
+ `https://uat-federation.usbank.com/sp/ACS.saml2`
+
+ c. In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<Environment>.usbank.com/sp/startSSO.ping?PartnerIdpId=<ID>`
+
+ > [!NOTE]
+ > The value is not real. Update this value with the actual Sign-on URL. Contact [U.S. Bank Prepaid Client support team](mailto:web.access.management@usbank.com) to get this value. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
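The Sign-on URL above is only a pattern. As a rough sketch of how the pieces fit together (the `environment` and `partner_idp_id` values below are hypothetical; the real values come from the U.S. Bank Prepaid support team), the URL can be assembled like this:

```python
from urllib.parse import urlencode

# Hypothetical placeholder values; replace with those U.S. Bank provides.
environment = "uat-federation"
partner_idp_id = "ExampleIdpId"

# Fill the <Environment> host segment and the PartnerIdpId query parameter.
sign_on_url = (
    f"https://{environment}.usbank.com/sp/startSSO.ping?"
    + urlencode({"PartnerIdpId": partner_idp_id})
)
```

Using `urlencode` keeps the `PartnerIdpId` value safely percent-encoded if it contains special characters.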
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, click the copy button to copy the **App Federation Metadata Url**, and save it on your computer.
+
+ ![The Certificate download link](common/copy-metadataurl.png)
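The App Federation Metadata Url points at an XML document describing Azure AD's SSO endpoint and signing certificate. A minimal sketch of extracting those two values (the sample metadata below is a hypothetical, heavily trimmed stand-in for the real document):

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed stand-in for real federation metadata.
SAMPLE_METADATA = """<EntityDescriptor
    xmlns="urn:oasis:names:tc:SAML:2.0:metadata"
    entityID="https://sts.windows.net/00000000-0000-0000-0000-000000000000/">
  <IDPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <KeyDescriptor use="signing">
      <KeyInfo xmlns="http://www.w3.org/2000/09/xmldsig#">
        <X509Data><X509Certificate>QkFTRTY0Q0VSVA==</X509Certificate></X509Data>
      </KeyInfo>
    </KeyDescriptor>
    <SingleSignOnService
        Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
        Location="https://login.microsoftonline.com/contoso-tenant/saml2"/>
  </IDPSSODescriptor>
</EntityDescriptor>"""

# SAML metadata and XML-DSig namespaces used in the document.
NS = {
    "md": "urn:oasis:names:tc:SAML:2.0:metadata",
    "ds": "http://www.w3.org/2000/09/xmldsig#",
}

root = ET.fromstring(SAMPLE_METADATA)
sso_url = root.find(".//md:SingleSignOnService", NS).get("Location")
signing_cert = root.find(".//ds:X509Certificate", NS).text.strip()
```

In practice you would fetch the real metadata from the copied URL; the parsing is the same.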
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to U.S. Bank Prepaid.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **U.S. Bank Prepaid**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see the "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure U.S. Bank Prepaid SSO
+
+To configure single sign-on on the **U.S. Bank Prepaid** side, you need to send the **App Federation Metadata Url** to the [U.S. Bank Prepaid support team](mailto:web.access.management@usbank.com). They configure this setting so that the SAML SSO connection is set properly on both sides.
+
+### Create U.S. Bank Prepaid test user
+
+In this section, you create a user called Britta Simon in U.S. Bank Prepaid. Work with [U.S. Bank Prepaid support team](mailto:web.access.management@usbank.com) to add the users in the U.S. Bank Prepaid platform. Users must be created and activated before you use single sign-on.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click **Test this application** in the Azure portal. This redirects to the U.S. Bank Prepaid Sign-on URL, where you can initiate the login flow.
+
+* Go to the U.S. Bank Prepaid Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click **Test this application** in the Azure portal, and you should be automatically signed in to the U.S. Bank Prepaid application for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the U.S. Bank Prepaid tile in My Apps, if configured in SP mode, you're redirected to the application sign-on page to initiate the login flow. If configured in IDP mode, you should be automatically signed in to the U.S. Bank Prepaid application for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure U.S. Bank Prepaid, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Waywedo Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/waywedo-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Way We Do | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Way We Do'
description: Learn how to configure single sign-on between Azure Active Directory and Way We Do.
Previously updated : 06/20/2019 Last updated : 02/25/2022
-# Tutorial: Integrate Way We Do with Azure Active Directory
+# Tutorial: Azure AD SSO integration with Way We Do
In this tutorial, you'll learn how to integrate Way We Do with Azure Active Directory (Azure AD). When you integrate Way We Do with Azure AD, you can:
In this tutorial, you'll learn how to integrate Way We Do with Azure Active Dire
* Enable your users to be automatically signed-in to Way We Do with their Azure AD accounts. * Manage your accounts in one central location - the Azure portal.
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
- ## Prerequisites
-To get started, you need the following items:
+To configure Azure AD integration with Way We Do, you need the following items:
-* An Azure AD subscription. If you don't have a subscription, you can get one-month free trial [here](https://azure.microsoft.com/pricing/free-trial/).
-* Way We Do single sign-on (SSO) enabled subscription.
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Way We Do single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD SSO in a test environment.
-* Way We Do supports **SP** initiated SSO
-* Way We Do supports **Just In Time** user provisioning
+* Way We Do supports **SP** initiated SSO.
+* Way We Do supports **Just In Time** user provisioning.
-## Adding Way We Do from the gallery
+## Add Way We Do from the gallery
To configure the integration of Way We Do into Azure AD, you need to add Way We Do from the gallery to your list of managed SaaS apps.
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
1. On the left navigation pane, select the **Azure Active Directory** service.
1. Navigate to **Enterprise Applications** and then select **All Applications**.
1. To add a new application, select **New application**.
1. In the **Add from the gallery** section, type **Way We Do** in the search box.
1. Select **Way We Do** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-## Configure and test Azure AD single sign-on
+## Configure and test Azure AD SSO for Way We Do
Configure and test Azure AD SSO with Way We Do using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Way We Do.
-To configure and test Azure AD SSO with Way We Do, complete the following building blocks:
+To configure and test Azure AD SSO with Way We Do, perform the following steps:
1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
-2. **[Configure Way We Do SSO](#configure-way-we-do-sso)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Way We Do test user](#create-way-we-do-test-user)** - to have a counterpart of Britta Simon in Way We Do that is linked to the Azure AD representation of user.
-6. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Way We Do SSO](#configure-way-we-do-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Way We Do test user](#create-way-we-do-test-user)** - to have a counterpart of B.Simon in Way We Do that is linked to the Azure AD representation of user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-### Configure Azure AD SSO
+## Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
-1. In the [Azure portal](https://portal.azure.com/), on the **Way We Do** application integration page, find the **Manage** section and select **Single sign-on**.
+1. In the Azure portal, on the **Way We Do** application integration page, find the **Manage** section and select **Single sign-on**.
1. On the **Select a Single sign-on method** page, select **SAML**.
-1. On the **Set up Single Sign-On with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
+1. On the **Set up Single Sign-On with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
![Edit Basic SAML Configuration](common/edit-urls.png)
-1. On the **Basic SAML Configuration** page, enter the values for the following fields:
-
- a. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.waywedo.com/Authentication/ExternalSignIn`
+1. On the **Basic SAML Configuration** section, perform the following steps:
- b. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
+ a. In the **Identifier (Entity ID)** text box, type a URL using the following pattern:
`https://<SUBDOMAIN>.waywedo.com`
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
+ `https://<SUBDOMAIN>.waywedo.com/Authentication/ExternalSignIn`
+ > [!NOTE]
- > These values are not real. Update these values with the actual Sign on URL and Identifier. Contact [Way We Do Client support team](mailto:support@waywedo.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+ > These values are not real. Update these values with the actual Identifier and Sign on URL. Contact [Way We Do Client support team](mailto:support@waywedo.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
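Both Way We Do values derive from the same tenant subdomain. A small sketch of filling the patterns (the `contoso` subdomain below is hypothetical; use the one the Way We Do support team gives you):

```python
from urllib.parse import urlsplit

subdomain = "contoso"  # hypothetical tenant subdomain

# Fill the <SUBDOMAIN> placeholder in both patterns.
identifier = f"https://{subdomain}.waywedo.com"
sign_on_url = f"https://{subdomain}.waywedo.com/Authentication/ExternalSignIn"

# Sanity check: both values must point at the same tenant host.
assert urlsplit(identifier).hostname == urlsplit(sign_on_url).hostname
```

Keeping the two values derived from one variable avoids a mismatched Identifier and Sign on URL, a common cause of SAML configuration errors.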
1. On the **Set up Single Sign-On with SAML** page, in the **SAML Signing Certificate** section, find **Certificate (Raw)** and select **Download** to download the certificate and save it on your computer.
Follow these steps to enable Azure AD SSO in the Azure portal.
![Copy configuration URLs](common/copy-configuration-urls.png)
-### Configure Way We Do SSO
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Way We Do.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Way We Do**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Way We Do SSO
1. To automate the configuration within Way We Do, you need to install **My Apps Secure Sign-in browser extension** by clicking **Install the extension**.
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click the **person icon** in the top right corner of any page in Way We Do, then click **Account** in the dropdown menu.
- ![Way We Do account](./media/waywedo-tutorial/tutorial_waywedo_account.png)
+ ![Way We Do account](./media/waywedo-tutorial/account.png)
1. Click the **menu icon** to open the push navigation menu and click **Single Sign On**.
- ![Way We Do single](./media/waywedo-tutorial/tutorial_waywedo_single.png)
+ ![Way We Do single](./media/waywedo-tutorial/single.png)
1. On the **Single sign-on setup** page, perform the following steps:
- ![Way We Do save](./media/waywedo-tutorial/tutorial_waywedo_save.png)
+ ![Way We Do save](./media/waywedo-tutorial/save.png)
1. Click the **Turn on single sign-on** toggle to **Yes** to enable Single Sign-On.
Follow these steps to enable Azure AD SSO in the Azure portal.
> Users added through single sign-on are added as general users and are not assigned a role in the system. An Administrator is able to go in and modify their security role as an editor or administrator and can also assign one or several Org Chart roles.

1. Click **Save** to persist your settings.
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Way We Do.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Way We Do**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
+
### Create Way We Do test user

In this section, a user called Britta Simon is created in Way We Do. Way We Do supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Way We Do, a new one is created after authentication.
> [!Note]
> If you need to create a user manually, contact [Way We Do Client support team](mailto:support@waywedo.com).
-### Test SSO
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you select the Way We Do tile in the Access Panel, you should be automatically signed in to the Way We Do for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the Way We Do sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Way We Do sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Way We Do tile in My Apps, you're redirected to the Way We Do sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Way We Do, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Zwayam Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/zwayam-tutorial.md
Title: 'Tutorial: Azure Active Directory integration with Zwayam | Microsoft Docs'
+ Title: 'Tutorial: Azure AD SSO integration with Zwayam'
description: Learn how to configure single sign-on between Azure Active Directory and Zwayam.
Previously updated : 03/29/2019 Last updated : 02/25/2022
-# Tutorial: Azure Active Directory integration with Zwayam
+# Tutorial: Azure AD SSO integration with Zwayam
-In this tutorial, you learn how to integrate Zwayam with Azure Active Directory (Azure AD).
-Integrating Zwayam with Azure AD provides you with the following benefits:
+In this tutorial, you'll learn how to integrate Zwayam with Azure Active Directory (Azure AD). When you integrate Zwayam with Azure AD, you can:
-* You can control in Azure AD who has access to Zwayam.
-* You can enable your users to be automatically signed-in to Zwayam (Single Sign-On) with their Azure AD accounts.
-* You can manage your accounts in one central location - the Azure portal.
-
-If you want to know more details about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+* Control in Azure AD who has access to Zwayam.
+* Enable your users to be automatically signed-in to Zwayam with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
## Prerequisites

To configure Azure AD integration with Zwayam, you need the following items:
-* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/)
-* Zwayam single sign-on enabled subscription
+* An Azure AD subscription. If you don't have an Azure AD environment, you can get a [free account](https://azure.microsoft.com/free/).
+* Zwayam single sign-on enabled subscription.
## Scenario description

In this tutorial, you configure and test Azure AD single sign-on in a test environment.
-* Zwayam supports **SP** initiated SSO
-
-## Adding Zwayam from the gallery
-
-To configure the integration of Zwayam into Azure AD, you need to add Zwayam from the gallery to your list of managed SaaS apps.
-
-**To add Zwayam from the gallery, perform the following steps:**
-
-1. In the **[Azure portal](https://portal.azure.com)**, on the left navigation panel, click **Azure Active Directory** icon.
-
- ![The Azure Active Directory button](common/select-azuread.png)
-
-2. Navigate to **Enterprise Applications** and then select the **All Applications** option.
-
- ![The Enterprise applications blade](common/enterprise-applications.png)
-
-3. To add new application, click **New application** button on the top of dialog.
-
- ![The New application button](common/add-new-app.png)
+* Zwayam supports **SP** initiated SSO.
-4. In the search box, type **Zwayam**, select **Zwayam** from result panel then click **Add** button to add the application.
+> [!NOTE]
+> The Identifier of this application is a fixed string value, so only one instance can be configured in one tenant.
- ![Zwayam in the results list](common/search-new-app.png)
+## Add Zwayam from the gallery
-## Configure and test Azure AD single sign-on
-
-In this section, you configure and test Azure AD single sign-on with Zwayam based on a test user called **Britta Simon**.
-For single sign-on to work, a link relationship between an Azure AD user and the related user in Zwayam needs to be established.
-
-To configure and test Azure AD single sign-on with Zwayam, you need to complete the following building blocks:
-
-1. **[Configure Azure AD Single Sign-On](#configure-azure-ad-single-sign-on)** - to enable your users to use this feature.
-2. **[Configure Zwayam Single Sign-On](#configure-zwayam-single-sign-on)** - to configure the Single Sign-On settings on application side.
-3. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with Britta Simon.
-4. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable Britta Simon to use Azure AD single sign-on.
-5. **[Create Zwayam test user](#create-zwayam-test-user)** - to have a counterpart of Britta Simon in Zwayam that is linked to the Azure AD representation of user.
-6. **[Test single sign-on](#test-single-sign-on)** - to verify whether the configuration works.
+To configure the integration of Zwayam into Azure AD, you need to add Zwayam from the gallery to your list of managed SaaS apps.
-### Configure Azure AD single sign-on
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Zwayam** in the search box.
+1. Select **Zwayam** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-In this section, you enable Azure AD single sign-on in the Azure portal.
+## Configure and test Azure AD SSO for Zwayam
-To configure Azure AD single sign-on with Zwayam, perform the following steps:
+Configure and test Azure AD SSO with Zwayam using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Zwayam.
-1. In the [Azure portal](https://portal.azure.com/), on the **Zwayam** application integration page, select **Single sign-on**.
+To configure and test Azure AD SSO with Zwayam, perform the following steps:
- ![Configure single sign-on link](common/select-sso.png)
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Zwayam SSO](#configure-zwayam-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Zwayam test user](#create-zwayam-test-user)** - to have a counterpart of B.Simon in Zwayam that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-2. On the **Select a Single sign-on method** dialog, select **SAML/WS-Fed** mode to enable single sign-on.
+## Configure Azure AD SSO
- ![Single sign-on select mode](common/select-saml-option.png)
+Follow these steps to enable Azure AD SSO in the Azure portal.
-3. On the **Set up Single Sign-On with SAML** page, click **Edit** icon to open **Basic SAML Configuration** dialog.
+1. In the Azure portal, on the **Zwayam** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
- ![Edit Basic SAML Configuration](common/edit-urls.png)
+ ![Edit Basic SAML Configuration](common/edit-urls.png)
4. On the **Basic SAML Configuration** section, perform the following steps:
- ![Zwayam Domain and URLs single sign-on information](common/sp-identifier.png)
+ a. In the **Identifier (Entity ID)** text box, type the URL:
+ `https://sso.zwayam.com/zwayam-saml/saml/metadata`
- a. In the **Sign on URL** text box, type a URL using the following pattern:
+ b. In the **Sign on URL** text box, type a URL using the following pattern:
`https://sso.zwayam.com/zwayam-saml/zwayam-saml/saml/login?idp=<SAML Entity ID>`
- b. In the **Identifier (Entity ID)** text box, type a URL:
- `https://sso.zwayam.com/zwayam-saml/saml/metadata`
	> [!NOTE]
	> The **Sign on URL** value is not real. Update the value with the actual Sign on URL. `<SAML Entity ID>` is the Azure AD Identifier value, which is explained later in the tutorial.
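For illustration, the Sign on URL can be derived from the pattern above by URL-encoding the Azure AD Identifier into the `idp` parameter. This is only a sketch: the identifier value you use must come from your own SAML setup page, and any value shown here is hypothetical.

```python
from urllib.parse import quote

def zwayam_sign_on_url(azure_ad_identifier: str) -> str:
    """Fill the <SAML Entity ID> placeholder in the Sign on URL pattern.

    azure_ad_identifier: the Azure AD Identifier copied from the
    SAML setup page (hypothetical example values only).
    """
    # URL-encode the identifier so ':' and '/' survive as a query value.
    return ("https://sso.zwayam.com/zwayam-saml/zwayam-saml/saml/login?idp="
            + quote(azure_ad_identifier, safe=""))
```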
To configure Azure AD single sign-on with Zwayam, perform the following steps:
![Copy configuration URLs](common/copy-configuration-urls.png)
- a. Login URL
-
- b. Azure AD Identifier
-
- c. Logout URL
-
-### Configure Zwayam Single Sign-On
-
-To configure single sign-on on **Zwayam** side, you need to send the downloaded **Federation Metadata XML** and appropriate copied URLs from Azure portal to [Zwayam support team](mailto:opendoors@zwayam.com). They set this setting to have the SAML SSO connection set properly on both sides.
- ### Create an Azure AD test user
-The objective of this section is to create a test user in the Azure portal called Britta Simon.
-
-1. In the Azure portal, in the left pane, select **Azure Active Directory**, select **Users**, and then select **All users**.
-
- ![The "Users and groups" and "All users" links](common/users.png)
-
-2. Select **New user** at the top of the screen.
-
- ![New user Button](common/new-user.png)
-
-3. In the User properties, perform the following steps.
-
- ![The User dialog box](common/user-properties.png)
-
- a. In the **Name** field enter **BrittaSimon**.
-
- b. In the **User name** field type brittasimon@yourcompanydomain.extension. For example, BrittaSimon@contoso.com
+In this section, you'll create a test user in the Azure portal called B.Simon.
- c. Select **Show password** check box, and then write down the value that's displayed in the Password box.
-
- d. Click **Create**.
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
### Assign the Azure AD test user
-In this section, you enable Britta Simon to use Azure single sign-on by granting access to Zwayam.
-
-1. In the Azure portal, select **Enterprise Applications**, select **All applications**, then select **Zwayam**.
-
- ![Enterprise applications blade](common/enterprise-applications.png)
-
-2. In the applications list, select **Zwayam**.
-
- ![The Zwayam link in the Applications list](common/all-applications.png)
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Zwayam.
-3. In the menu on the left, select **Users and groups**.
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Zwayam**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you are expecting a role to be assigned to the users, you can select it from the **Select a role** dropdown. If no role has been set up for this app, you see "Default Access" role selected.
+1. In the **Add Assignment** dialog, click the **Assign** button.
- ![The "Users and groups" link](common/users-groups-blade.png)
+## Configure Zwayam SSO
-4. Click the **Add user** button, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add Assignment pane](common/add-assign-user.png)
-
-5. In the **Users and groups** dialog select **Britta Simon** in the Users list, then click the **Select** button at the bottom of the screen.
-
-6. If you are expecting any role value in the SAML assertion then in the **Select Role** dialog select the appropriate role for the user from the list, then click the **Select** button at the bottom of the screen.
-
-7. In the **Add Assignment** dialog click the **Assign** button.
+To configure single sign-on on the **Zwayam** side, you need to send the downloaded **Federation Metadata XML** and the appropriate copied URLs from the Azure portal to the [Zwayam support team](mailto:opendoors@zwayam.com). They configure this setting so that the SAML SSO connection is set up properly on both sides.
### Create Zwayam test user

In this section, you create a user called Britta Simon in Zwayam. Work with the [Zwayam support team](mailto:opendoors@zwayam.com) to add the users in the Zwayam platform. Users must be created and activated before you use single sign-on.
-### Test single sign-on
+## Test SSO
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
+In this section, you test your Azure AD single sign-on configuration with the following options.
-When you click the Zwayam tile in the Access Panel, you should be automatically signed in to the Zwayam for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
+* Click on **Test this application** in the Azure portal. This will redirect to the Zwayam sign-on URL, where you can initiate the login flow.
-## Additional Resources
+* Go to the Zwayam sign-on URL directly and initiate the login flow from there.
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
+* You can use Microsoft My Apps. When you click the Zwayam tile in My Apps, you're redirected to the Zwayam sign-on URL. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
+## Next steps
-- [What is Conditional Access in Azure Active Directory?](../conditional-access/overview.md)
+Once you configure Zwayam, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/whats-new.md
Applications that use the Azure Active Directory Verifiable Credentials service
| Tenant region | Request API endpoint POST |
|--|--|
-| Europe | https://beta.eu.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request |
-| Non-EU | https://beta.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request |
+| Europe | `https://beta.eu.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request` |
+| Non-EU | `https://beta.did.msidentity.com/v1.0/{tenantID}/verifiablecredentials/request` |
To confirm which endpoint you should use, we recommend checking your Azure AD tenant's region as described above. If the Azure AD tenant is in the EU, you should use the Europe endpoint.
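As a rough illustration of the endpoint choice above, the host can be selected from the tenant's region before issuing the POST. This is a minimal sketch under stated assumptions: only the two documented hosts exist, the region strings used are illustrative, and the tenant ID is a placeholder GUID.

```python
def request_api_endpoint(tenant_id: str, tenant_region: str) -> str:
    """Pick the Request API base endpoint from the Azure AD tenant region.

    Mirrors the region table above: EU tenants use the Europe host,
    everything else uses the non-EU host (assumption: no other hosts).
    """
    # Normalize the region label before matching (illustrative values).
    if tenant_region.strip().lower() in ("eu", "europe"):
        host = "beta.eu.did.msidentity.com"
    else:
        host = "beta.did.msidentity.com"
    return f"https://{host}/v1.0/{tenant_id}/verifiablecredentials/request"
```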
advisor Advisor Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-cost-recommendations.md
Last updated 10/29/2021
Azure Advisor helps you optimize and reduce your overall Azure spend by identifying idle and underutilized resources. You can get cost recommendations from the **Cost** tab on the Advisor dashboard.
-## How to access cost recommendations in Azure Advisor
1. Sign in to the [**Azure portal**](https://portal.azure.com).
1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
This is a special type of resize recommendation, where Advisor analyzes workload
- If the P95 of CPU is less than two times the burstable SKUs' baseline performance
- If the current SKU does not have accelerated networking enabled (burstable SKUs don't support accelerated networking yet)
- If we determine that the burstable SKU credits are sufficient to support the average CPU utilization over 7 days
+- The result is a recommendation suggesting that the user resize their current VM to a burstable SKU (with the same number of cores) to take advantage of the low costs and the fact that the workload has low average utilization but high spikes in cases, which can be best served by the B-series SKU.
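The screening rules above can be sketched as a simple predicate. This is illustrative only: the real analysis runs inside Advisor, and the baseline-performance value passed in is a stand-in, since actual B-series baselines vary by SKU.

```python
import statistics

def qualifies_for_burstable(cpu_samples, burstable_baseline_pct,
                            accelerated_networking, credits_cover_avg):
    """Sketch of the burstable-SKU screening rules listed above.

    cpu_samples: per-interval CPU utilization (%) over the 7-day window.
    burstable_baseline_pct: baseline performance of the candidate B-series
    SKU (illustrative; real baselines vary by SKU).
    accelerated_networking: whether the current SKU has it enabled.
    credits_cover_avg: whether burstable credits cover the average CPU.
    """
    p95 = statistics.quantiles(cpu_samples, n=100)[94]  # 95th percentile
    return (p95 < 2 * burstable_baseline_pct
            and not accelerated_networking   # B-series lacks this feature
            and credits_cover_avg)           # credits sustain avg usage
```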
Advisor shows the estimated cost savings for either recommended action: resize or shut down. For resize, Advisor provides current and target SKU information. To be more selective about the actioning on underutilized virtual machines, you can adjust the CPU utilization rule on a per-subscription basis.
In such cases simply use the Dismiss/Postpone options associated with the recomm
We are constantly working on improving these recommendations. Feel free to share feedback on [Advisor Forum](https://aka.ms/advisorfeedback).
-## Optimize spend for MariaDB, MySQL, and PostgreSQL servers by right-sizing
-Advisor analyses your usage and evaluates whether your MariaDB, MySQL, or PostgreSQL database server resources have been underutilized for an extended time over the past seven days. Low resource utilization results in unwanted expenditure that you can fix without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend that you reduce the compute size (vCores) by half.
-
-## Reduce costs by eliminating unprovisioned ExpressRoute circuits
-
-Advisor identifies Azure ExpressRoute circuits that have been in the provider status of **Not provisioned** for more than one month. It recommends deleting the circuit if you aren't planning to provision the circuit with your connectivity provider.
-
-## Reduce costs by deleting or reconfiguring idle virtual network gateways
-
-Advisor identifies virtual network gateways that have been idle for more than 90 days. Because these gateways are billed hourly, you should consider reconfiguring or deleting them if you don't intend to use them anymore.
-
-## Buy reserved virtual machine instances to save money over pay-as-you-go costs
-
-Advisor reviews your virtual machine usage over the past 30 days to determine if you could save money by purchasing an Azure reservation. Advisor shows you the regions and sizes where the potential for savings is highest and the estimated savings from purchasing reservations. With Azure reservations, you can pre-purchase the base costs for your virtual machines. Discounts automatically apply to new or existing VMs that have the same size and region as your reservations. [Learn more about Azure Reserved VM Instances.](https://azure.microsoft.com/pricing/reserved-vm-instances/)
-
-Advisor also notifies you of your reserved instances that will expire in the next 30 days. It recommends that you purchase new reserved instances to avoid pay-as-you-go pricing.
-
-## Buy reserved instances for several resource types to save over your pay-as-you-go costs
-
-Advisor analyzes usage patterns for the past 30 days for the following resources and recommends reserved capacity purchases that optimize costs.
-
-### Azure Cosmos DB reserved capacity
-Advisor analyzes your Azure Cosmos DB usage patterns for the past 30 days and recommends reserved capacity purchases to optimize costs. By using reserved capacity, you can pre-purchase Azure Cosmos DB hourly usage and save over your pay-as-you-go costs. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates savings estimates for individual subscriptions by using 3-year reservation pricing and by extrapolating the usage patterns observed over the past 30 days. Shared scope recommendations are available for reserved capacity purchases and can increase savings.
-
-### SQL Database and SQL Managed Instance reserved capacity
-Advisor analyzes SQL Database and SQL Managed Instance usage patterns over the past 30 days. It then recommends reserved capacity purchases that optimize costs. By using reserved capacity, you can pre-purchase SQL DB hourly usage and save over your SQL compute costs. Your SQL license is charged separately and isn't discounted by the reservation. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates savings estimates for individual subscriptions by using 3-year reservation pricing and by extrapolating the usage patterns observed over the past 30 days. Shared scope recommendations are available for reserved capacity purchases and can increase savings. For details, see [Azure SQL Database & SQL Managed Instance reserved capacity](../azure-sql/database/reserved-capacity-overview.md).
-
-### App Service Stamp Fee reserved capacity
-Advisor analyzes the Stamp Fee usage pattern for your Azure App Service isolated environment over the past 30 days and recommends reserved capacity purchases that optimize costs. By using reserved capacity, you can pre-purchase hourly usage for the isolated environment Stamp Fee and save over your pay-as-you-go costs. Note that reserved capacity applies only to the Stamp Fee and not to App Service instances. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates saving estimates for individual subscriptions by using 3-year reservation pricing based on usage patterns over the past 30 days.
-
-### Blob storage reserved capacity
-Advisor analyzes your Azure Blob storage and Azure Data Lake storage usage over the past 30 days. It then calculates reserved capacity purchases that optimize costs. With reserved capacity, you can pre-purchase hourly usage and save over your current on-demand costs. Blob storage reserved capacity applies only to data stored on Azure Blob general-purpose v2 and Azure Data Lake Storage Gen2 accounts. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates savings estimates for individual subscriptions by using 3-year reservation pricing and the usage patterns observed over the past 30 days. Shared scope recommendations are available for reserved capacity purchases and can increase savings.
-
-### MariaDB, MySQL, and PostgreSQL reserved capacity
-Advisor analyzes your usage patterns for Azure Database for MariaDB, Azure Database for MySQL, and Azure Database for PostgreSQL over the past 30 days. It then recommends reserved capacity purchases that optimize costs. By using reserved capacity, you can pre-purchase MariaDB, MySQL, and PostgreSQL hourly usage and save over your current costs. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates savings estimates for individual subscriptions by using 3-year reservation pricing and the usage patterns observed over the past 30 days. Shared scope recommendations are available for reserved capacity purchases and can increase savings.
-
-### Azure Synapse Analytics reserved capacity
-Advisor analyzes your Azure Synapse Analytics usage patterns over the past 30 days and recommends reserved capacity purchases that optimize costs. By using reserved capacity, you can pre-purchase Synapse Analytics hourly usage and save over your on-demand costs. Reserved capacity is a billing benefit and automatically applies to new and existing deployments. Advisor calculates savings estimates for individual subscriptions by using 3-year reservation pricing and the usage patterns observed over the past 30 days. Shared scope recommendations are available for reserved capacity purchases and can increase savings.
-
-## Delete unassociated public IP addresses to save money
-
-Advisor identifies public IP addresses that aren't associated with Azure resources like load balancers and VMs. A nominal charge is associated with these public IP addresses. If you don't plan to use them, you can save money by deleting them.
-
-## Delete Azure Data Factory pipelines that are failing
-
-Advisor detects Azure Data Factory pipelines that repeatedly fail. It recommends that you resolve the problems or delete the pipelines if you don't need them. You're billed for these pipelines even though they're not serving you while they're failing.
-
-## Use standard snapshots for managed disks
-To save 60% of cost, we recommend storing your snapshots in standard storage, regardless of the storage type of the parent disk. This option is the default option for managed disk snapshots. Advisor identifies snapshots that are stored in premium storage and recommends migrating them from premium to standard storage. [Learn more about managed disk pricing.](https://aka.ms/aa_manageddisksnapshot_learnmore)
-
-## Use lifecycle management
-By using intelligence about your Azure Blob storage object count, total size, and transactions, Advisor detects whether you should enable lifecycle management to tier data on one or more of your storage accounts. It prompts you to create lifecycle management rules to automatically tier your data to cool or archive storage to optimize your storage costs while retaining your data in Azure Blob storage for application compatibility.
-
-## Create an Ephemeral OS Disk recommendation
-[Ephemeral OS Disk](../virtual-machines/ephemeral-os-disks.md) allows you to:
-- Save on storage costs for OS disks. -- Get lower read/write latency to OS disks. -- Get faster VM reimage operations by resetting the OS (and temporary disk) to its original state.-
-It's preferable to use Ephemeral OS Disk for short-lived IaaS VMs or VMs with stateless workloads. Advisor provides recommendations for resources that can benefit from Ephemeral OS Disk.
-
-## Reduce Azure Data Explorer table cache-period (policy) for cluster cost optimization (Preview)
-Advisor identifies resources where reducing the table cache policy will free up Azure Data Explorer cluster nodes having low CPU utilization, memory, and a high cache size configuration.
-
-## Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container
-Based on your usage in the past 7 days, you can save by using manual throughput instead of autoscale. Manual throughput is more cost-effective when average utilization of your max throughput (RU/s) is greater than 66% or less than or equal to 10%. Cost savings amount represents potential savings from using the recommended manual throughput, based on usage in the past 7 days. Your actual savings may vary depending on the manual throughput you set and whether your average utilization of throughput continues to be similar to the time period analyzed. The estimated savings do not account for any discount that may apply to your account.
-
-## Enable autoscale on your Azure Cosmos DB database or container
-Based on your usage in the past 7 days, you can save by enabling autoscale. For each hour, we compared the RU/s provisioned to the actual utilization of the RU/s (what autoscale would have scaled to) and calculated the cost savings across the time period. Autoscale helps optimize your cost by scaling down RU/s when not in use.
- ## Next steps To learn more about Advisor recommendations, see:
+* [Advisor cost recommendations (full list)](advisor-reference-cost-recommendations.md)
* [Introduction to Advisor](advisor-overview.md) * [Advisor score](azure-advisor-score.md) * [Get started with Advisor](advisor-get-started.md)
-* [Advisor performance recommendations](advisor-performance-recommendations.md)
-* [Advisor reliability recommendations](advisor-high-availability-recommendations.md)
+* [Advisor performance recommendations](advisor-reference-performance-recommendations.md)
+* [Advisor reliability recommendations](advisor-reference-reliability-recommendations.md)
* [Advisor security recommendations](advisor-security-recommendations.md)
-* [Advisor operational excellence recommendations](advisor-operational-excellence-recommendations.md)
+* [Advisor operational excellence recommendations](advisor-reference-operational-excellence-recommendations.md)
advisor Advisor Reference Cost Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-cost-recommendations.md
+
+ Title: Cost recommendations
+description: Full list of available cost recommendations in Advisor.
+ Last updated : 02/04/2022++
+# Cost recommendations
+
+Azure Advisor helps you optimize and reduce your overall Azure spend by identifying idle and underutilized resources. You can get cost recommendations from the **Cost** tab on the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Cost** tab.
+
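Besides the portal, recommendations can be retrieved programmatically and filtered by category. The record shape below is a simplified, invented stand-in for the real Advisor API payload, used only to illustrate the filtering step:

```python
# Hypothetical records: a real list would come from the Azure Advisor API.
# The "category" values mirror the tabs on the Advisor dashboard.
recommendations = [
    {"name": "LowUsageVmV2", "category": "Cost"},
    {"name": "EnableVmBackup", "category": "HighAvailability"},
    {"name": "CosmosDBIdleContainers", "category": "Cost"},
]

# Keep only the cost recommendations, as the Cost tab does.
cost_recommendations = [r for r in recommendations if r["category"] == "Cost"]
print([r["name"] for r in cost_recommendations])  # ['LowUsageVmV2', 'CosmosDBIdleContainers']
```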
+## Compute
+
+### Use Standard Storage to store Managed Disks snapshots
+
+We recommend storing your snapshots in Standard Storage, regardless of the storage type of the parent disk, to save up to 60% on cost. This is the default option for Managed Disks snapshots. Migrate your snapshots from Premium to Standard Storage, and refer to the Managed Disks pricing details.
+
+Learn more about [Managed Disk Snapshot - ManagedDiskSnapshot (Use Standard Storage to store Managed Disks snapshots)](https://aka.ms/aa_manageddisksnapshot_learnmore).
+
+### Right-size or shutdown underutilized virtual machines
+
+We've analyzed the usage patterns of your virtual machine over the past 7 days and identified virtual machines with low usage. While certain scenarios can result in low utilization by design, you can often save money by managing the size and number of virtual machines.
+
+Learn more about [Virtual machine - LowUsageVmV2 (Right-size or shutdown underutilized virtual machines)](https://aka.ms/aa_lowusagerec_learnmore).
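The low-usage check can be sketched as an average over a window of CPU samples. The 5% threshold and the sample data below are assumptions for illustration only; Advisor applies its own internal heuristics over 7 days of telemetry:

```python
def is_underutilized(cpu_samples, threshold_pct=5.0):
    """Flag a VM whose average CPU over the window falls below the threshold.

    The 5% threshold is an illustrative assumption, not Advisor's rule.
    """
    return sum(cpu_samples) / len(cpu_samples) < threshold_pct

week_of_avg_cpu = [2.0, 3.5, 1.0, 4.0, 2.5, 3.0, 1.5]  # invented daily averages
print(is_underutilized(week_of_avg_cpu))  # True (average is 2.5%)
```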
+
+### You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.
+
+We have observed disks that have not been attached to a VM for more than 30 days. Evaluate whether you still need each disk. If you decide to delete a disk, recovery is not possible, so create a snapshot before deletion or make sure the data on the disk is no longer required.
+
+Learn more about [Disk - DeleteOrDowngradeUnattachedDisks (You have disks which have not been attached to a VM for more than 30 days. Please evaluate if you still need the disk.)](https://aka.ms/unattacheddisks).
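The 30-day detachment check can be sketched as a simple filter over a disk inventory. The record shape and the dates below are invented for illustration:

```python
from datetime import date, timedelta

def stale_unattached_disks(disks, today, max_idle_days=30):
    """Return names of disks detached for more than max_idle_days."""
    cutoff = today - timedelta(days=max_idle_days)
    return [d["name"] for d in disks
            if d["attached_to"] is None and d["detached_on"] < cutoff]

disks = [  # hypothetical inventory
    {"name": "disk-a", "attached_to": None, "detached_on": date(2022, 1, 1)},
    {"name": "disk-b", "attached_to": "vm-1", "detached_on": None},
    {"name": "disk-c", "attached_to": None, "detached_on": date(2022, 2, 20)},
]
print(stale_unattached_disks(disks, today=date(2022, 3, 1)))  # ['disk-a']
```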
+
+## MariaDB
+
+### Right-size underutilized MariaDB servers
+
+Our internal telemetry shows that the MariaDB database server resources have been underutilized for an extended period of time over the last 7 days. Low resource utilization results in unwanted expenditure which can be fixed without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend reducing the compute size (vCores) by half.
+
+Learn more about [MariaDB server - OrcasMariaDbCpuRightSize (Right-size underutilized MariaDB servers)](https://aka.ms/mariadbpricing).
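The "reduce vCores by half" guidance (which also applies to the MySQL and PostgreSQL recommendations below) amounts to a simple calculation; the floor of one vCore is an assumption so the sketch never recommends zero compute:

```python
def recommended_vcores(current_vcores):
    """Halve the compute size, never dropping below one vCore.

    Mirrors only the 'reduce vCores by half' guidance in the
    recommendation text; Advisor's actual sizing logic is internal.
    """
    return max(1, current_vcores // 2)

print(recommended_vcores(8))  # 4
print(recommended_vcores(1))  # 1
```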
+
+## MySQL
+
+### Right-size underutilized MySQL servers
+
+Our internal telemetry shows that the MySQL database server resources have been underutilized for an extended period of time over the last 7 days. Low resource utilization results in unwanted expenditure which can be fixed without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend reducing the compute size (vCores) by half.
+
+Learn more about [MySQL server - OrcasMySQLCpuRightSize (Right-size underutilized MySQL servers)](https://aka.ms/mysqlpricing).
+
+## PostgreSQL
+
+### Right-size underutilized PostgreSQL servers
+
+Our internal telemetry shows that the PostgreSQL database server resources have been underutilized for an extended period of time over the last 7 days. Low resource utilization results in unwanted expenditure which can be fixed without significant performance impact. To reduce your costs and efficiently manage your resources, we recommend reducing the compute size (vCores) by half.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlCpuRightSize (Right-size underutilized PostgreSQL servers)](https://aka.ms/postgresqlpricing).
+
+## Cosmos DB
+
+### Review the configuration of your Azure Cosmos DB free tier account
+
+Your Azure Cosmos DB free tier account currently contains resources with a total provisioned throughput that exceeds 1,000 Request Units per second (RU/s). Because the Azure Cosmos DB free tier covers only the first 1,000 RU/s of throughput provisioned across your account, any throughput beyond 1,000 RU/s is billed at the regular pricing. As a result, you will be charged for the throughput currently provisioned on your Azure Cosmos DB account.
+
+Learn more about [Cosmos DB account - CosmosDBFreeTierOverage (Review the configuration of your Azure Cosmos DB free tier account)](/azure/cosmos-db/understand-your-bill#azure-free-tier).
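The free tier overage described above follows directly from the 1,000 RU/s allowance:

```python
FREE_TIER_RUS = 1000  # first 1,000 RU/s provisioned across the account are free

def billable_rus(total_provisioned_rus):
    """RU/s billed at regular pricing once the free tier is exhausted."""
    return max(0, total_provisioned_rus - FREE_TIER_RUS)

print(billable_rus(1400))  # 400 RU/s billed at the regular rate
print(billable_rus(800))   # 0, fully covered by the free tier
```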
+
+### Consider taking action on your idle Azure Cosmos DB containers
+
+We haven't detected any activity over the past 30 days on one or more of your Azure Cosmos DB containers. Consider lowering their throughput, or deleting them if you don't plan on using them.
+
+Learn more about [Cosmos DB account - CosmosDBIdleContainers (Consider taking action on your idle Azure Cosmos DB containers)](/azure/cosmos-db/how-to-provision-container-throughput).
+
+### Enable autoscale on your Azure Cosmos DB database or container
+
+Based on your usage in the past 7 days, you can save by enabling autoscale. For each hour, we compared the RU/s provisioned to the actual utilization of the RU/s (what autoscale would have scaled to) and calculated the cost savings across the time period. Autoscale helps optimize your cost by scaling down RU/s when not in use.
+
+Learn more about [Cosmos DB account - CosmosDBAutoscaleRecommendations (Enable autoscale on your Azure Cosmos DB database or container)](/azure/cosmos-db/provision-throughput-autoscale).
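The hourly comparison behind this estimate can be sketched as follows. The price constant and the RU/s series are made-up illustrative numbers, not published Cosmos DB prices, and actual utilization stands in for what autoscale would have scaled to:

```python
def estimated_autoscale_savings(hourly_provisioned, hourly_utilized,
                                price_per_100_rus=0.008):
    """Sum the hourly gap between provisioned RU/s and what autoscale
    would have scaled to, then convert to a cost at an assumed rate."""
    saved_rus = sum(max(0, p - u)
                    for p, u in zip(hourly_provisioned, hourly_utilized))
    return saved_rus / 100 * price_per_100_rus

provisioned = [1000, 1000, 1000]  # manual throughput, invented data
utilized = [300, 900, 500]        # what autoscale would have billed
print(round(estimated_autoscale_savings(provisioned, utilized), 4))  # 0.104
```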
+
+### Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container
+
+Based on your usage in the past 7 days, you can save by using manual throughput instead of autoscale. Manual throughput is more cost-effective when average utilization of your max throughput (RU/s) is greater than 66% or less than or equal to 10%.
+
+Learn more about [Cosmos DB account - CosmosDBMigrateToManualThroughputFromAutoscale (Configure manual throughput instead of autoscale on your Azure Cosmos DB database or container)](/azure/cosmos-db/how-to-choose-offer).
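The decision rule stated above (manual throughput wins when average utilization of max RU/s is above 66% or at or below 10%) can be expressed directly:

```python
def manual_is_cheaper(avg_utilization_pct):
    """Apply the thresholds from the recommendation text."""
    return avg_utilization_pct > 66 or avg_utilization_pct <= 10

print(manual_is_cheaper(75))  # True  (steady high usage)
print(manual_is_cheaper(8))   # True  (barely used)
print(manual_is_cheaper(40))  # False (variable usage suits autoscale)
```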
+
+## Data Explorer
+
+### Unused/Empty Data Explorer resources
+
+This recommendation surfaces all Data Explorer resources whose last update was more than 10 days ago and that were found either empty or with no activity. The recommended action is to validate the resources and consider deleting them.
+
+Learn more about [Data explorer resource - ADX Unused resource (Unused/Empty Data Explorer resources)](https://aka.ms/adxemptycluster).
+
+### Right-size Data Explorer resources for optimal cost
+
+One or more of the following were detected: low data capacity, low CPU utilization, or low memory utilization. The recommended action is to scale down and/or scale in the resource to the recommended configuration shown.
+
+Learn more about [Data explorer resource - Right-size for cost (Right-size Data Explorer resources for optimal cost)](https://aka.ms/adxskusize).
+
+### Reduce Data Explorer table cache policy to optimize costs
+
+Reducing the table cache policy will free up Data Explorer cluster nodes with low CPU utilization, memory, and a high cache size configuration.
+
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTables (Reduce Data Explorer table cache policy to optimize costs)](https://aka.ms/adxcachepolicy).
+
+### Unused Data Explorer resources with data
+
+This recommendation surfaces all Data Explorer resources whose last update was more than 10 days ago and that were found to contain data but have no activity. The recommended action is to validate the resources and consider stopping the unused ones.
+
+Learn more about [Data explorer resource - StopUnusedClustersWithData (Unused Data Explorer resources with data)](https://aka.ms/adxunusedcluster).
+
+### Cleanup unused storage in Data Explorer resources
+
+Over time, internal extent merge operations can accumulate redundant and unused storage artifacts that remain beyond the data retention period. While this unreferenced data doesn't negatively impact the performance, it can lead to more storage use and larger costs than necessary. This recommendation surfaces Data Explorer resources that have unused storage artifacts. The recommended action is to run the cleanup command to detect and delete unused storage artifacts and reduce cost. Note that data recoverability will be reset to the cleanup time and will not be available for data that was created before running the cleanup.
+
+Learn more about [Data explorer resource - RunCleanupCommandForAzureDataExplorer (Cleanup unused storage in Data Explorer resources)](https://aka.ms/adxcleanextentcontainers).
+
+### Enable optimized autoscale for Data Explorer resources
+
+Looks like your resource could have automatically scaled to reduce costs (based on the usage patterns, cache utilization, ingestion utilization, and CPU). To optimize costs and performance, we recommend enabling optimized autoscale. To make sure you don't exceed your planned budget, add a maximum instance count when you enable this.
+
+Learn more about [Data explorer resource - EnableOptimizedAutoscaleAzureDataExplorer (Enable optimized autoscale for Data Explorer resources)](https://aka.ms/adxoptimizedautoscale).
+
+## Network
+
+### Delete ExpressRoute circuits in the provider status of Not Provisioned
+
+We noticed that your ExpressRoute circuit is in the provider status of Not Provisioned for more than one month. This circuit is currently billed hourly to your subscription. We recommend that you delete the circuit if you aren't planning to provision the circuit with your connectivity provider.
+
+Learn more about [ExpressRoute circuit - ExpressRouteCircuit (Delete ExpressRoute circuits in the provider status of Not Provisioned)](https://aka.ms/expressroute).
+
+### Repurpose or delete idle virtual network gateways
+
+We noticed that your virtual network gateway has been idle for over 90 days. This gateway is being billed hourly. You may want to reconfigure this gateway, or delete it if you do not intend to use it anymore.
+
+Learn more about [Virtual network gateway - IdleVNetGateway (Repurpose or delete idle virtual network gateways)](https://aka.ms/aa_idlevpngateway_learnmore).
+
+## Recovery Services
+
+### Use differential or incremental backup for database workloads
+
+For SQL/HANA DBs in Azure VMs being backed up to Azure, using a daily differential with a weekly full backup is often more cost-effective than daily full backups. For HANA, Azure Backup also supports incremental backup, which is even more cost-effective.
+
+Learn more about [Recovery Services vault - Optimize costs of database backup (Use differential or incremental backup for database workloads)](https://aka.ms/DBBackupCostOptimization).
+
+## Storage
+
+### Revisit retention policy for classic log data in storage accounts
+
+Large amounts of classic log data were detected in your storage accounts. You are billed on the capacity of data stored in storage accounts, including classic logs. Check the retention policy of your classic logs and update it to retain only the log data you need. This reduces unnecessary classic log data and lowers your storage bill.
+
+Learn more about [Storage Account - XstoreLargeClassicLog (Revisit retention policy for classic log data in storage accounts)](/azure/storage/common/manage-storage-analytics-logs#modify-retention-policy).
+
+## Reserved Instances
+
+### Configure automatic renewal for your expiring reservation
+
+The reserved instances listed below are expiring soon or have recently expired. Your resources will continue to operate normally; however, you will be billed at the on-demand rates going forward. To optimize your costs, configure automatic renewal for these reservations or purchase a replacement manually.
+
+Learn more about [Reservation - ReservedInstancePurchaseNew (Configure automatic renewal for your expiring reservation)](https://aka.ms/reservedinstances).
+
+### Buy virtual machine reserved instances to save money over pay-as-you-go costs
+
+Reserved instances can provide a significant discount over pay-as-you-go prices. With reserved instances, you can pre-purchase the base costs for your virtual machines. Discounts will automatically apply to new or existing VMs that have the same size and region as your reserved instance. We analyzed your usage over the last 30 days and recommend money-saving reserved instances.
+
+Learn more about [Virtual machine - ReservedInstance (Buy virtual machine reserved instances to save money over pay-as-you-go costs)](https://aka.ms/reservedinstances).
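The savings calculation behind a reserved instance purchase can be sketched for a VM running continuously. Both hourly rates below are hypothetical; real prices vary by size, region, and reservation term:

```python
def reservation_savings(payg_hourly, reserved_hourly, hours=24 * 365):
    """Annual savings from a reserved instance versus pay-as-you-go,
    assuming the VM runs for every hour of the year."""
    return (payg_hourly - reserved_hourly) * hours

# Invented rates for a mid-size VM running all year.
print(round(reservation_savings(payg_hourly=0.10, reserved_hourly=0.06), 2))  # 350.4
```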
+
+### Consider Cosmos DB reserved instance to save over your pay-as-you-go costs
+
+We analyzed your Cosmos DB usage pattern over the last 30 days and calculated a reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Cosmos DB hourly usage and save over your pay-as-you-go costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings even more.
+
+Learn more about [Subscription - CosmosDBReservedCapacity (Consider Cosmos DB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs
+
+We analyzed your SQL PaaS usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase hourly usage for your SQL PaaS deployments and save over your SQL PaaS compute costs. SQL license is charged separately and is not discounted by the reservation. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - SQLReservedCapacity (Consider SQL PaaS DB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider App Service stamp fee reserved instance to save over your on-demand costs
+
+We analyzed your App Service isolated environment stamp fees usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase hourly usage for the isolated environment stamp fee and save over your Pay-as-you-go costs. Note that reserved instance only applies to the stamp fee and not to the App Service instances. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions based on usage pattern over last 30 days.
+
+Learn more about [Subscription - AppServiceReservedCapacity (Consider App Service stamp fee reserved instance to save over your on-demand costs)](https://aka.ms/rirecommendations).
+
+### Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs
+
+We analyzed your Azure Database for MariaDB usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase MariaDB hourly usage and save over your compute costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - MariaDBSQLReservedCapacity (Consider Database for MariaDB reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider Database for MySQL reserved instance to save over your pay-as-you-go costs
+
+We analyzed your MySQL Database usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase MySQL hourly usage and save over your compute costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - MySQLReservedCapacity (Consider Database for MySQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs
+
+We analyzed your Database for PostgreSQL usage pattern over last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase PostgreSQL Database hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - PostgreSQLReservedCapacity (Consider Database for PostgreSQL reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider Cache for Redis reserved instance to save over your pay-as-you-go costs
+
+We analyzed your Cache for Redis usage pattern over last 30 days and calculated reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Cache for Redis hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - RedisCacheReservedCapacity (Consider Cache for Redis reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs
+
+We analyzed your Azure Synapse Analytics usage pattern over the last 30 days and recommend reserved instance purchase that maximizes your savings. With reserved instance you can pre-purchase Synapse Analytics hourly usage and save over your on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - SQLDWReservedCapacity (Consider Azure Synapse Analytics (formerly SQL DW) reserved instance to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### (Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs
+
+We analyzed your Azure Blob and Datalake storage usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Blob storage reserved instance applies only to data stored on Azure Blob (GPv2) and Azure Data Lake Storage (Gen 2). Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - BlobReservedCapacity ((Preview) Consider Blob storage reserved instance to save on Blob v2 and Datalake storage Gen2 costs)](https://aka.ms/rirecommendations).
+
+### (Preview) Consider Azure Data explorer reserved capacity to save over your pay-as-you-go costs
+
+We analyzed your Azure Data Explorer usage pattern over last 30 days and recommend reserved capacity purchase that maximizes your savings. With reserved capacity you can pre-purchase Data Explorer hourly usage and get savings over your on-demand costs. Reserved capacity is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions using 3-year reservation pricing and last 30 day's usage pattern. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - DataExplorerReservedCapacity ((Preview) Consider Azure Data explorer reserved capacity to save over your pay-as-you-go costs)](https://aka.ms/rirecommendations).
+
+### Consider Azure Dedicated Host reserved instance to save over your on-demand costs
+
+We analyzed your Azure Dedicated Host usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - AzureDedicatedHostReservedCapacity (Consider Azure Dedicated Host reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Data Factory reserved instance to save over your on-demand costs
+
+We analyzed your Data Factory usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - DataFactorybReservedCapacity (Consider Data Factory reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Azure Data Explorer reserved instance to save over your on-demand costs
+
+We analyzed your Azure Data Explorer usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - AzureDataExplorerReservedCapacity (Consider Azure Data Explorer reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Azure Files reserved instance to save over your on-demand costs
+
+We analyzed your Azure Files usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - AzureFilesReservedCapacity (Consider Azure Files reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Azure VMware Solution reserved instance to save over your on-demand costs
+
+We analyzed your Azure VMware Solution usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - AzureVMwareSolutionReservedCapacity (Consider Azure VMware Solution reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### (Preview) Consider Databricks reserved capacity to save over your on-demand costs
+
+We analyzed your Databricks usage over last 30 days and calculated reserved capacity purchase that would maximize your savings. With reserved capacity you can pre-purchase hourly usage and save over your current on-demand costs. Reserved capacity is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions using 3-year reservation pricing and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - DataBricksReservedCapacity ((Preview) Consider Databricks reserved capacity to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider NetApp Storage reserved instance to save over your on-demand costs
+
+We analyzed your NetApp Storage usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - NetAppStorageReservedCapacity (Consider NetApp Storage reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Azure Managed Disk reserved instance to save over your on-demand costs
+
+We analyzed your Azure Managed Disk usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - AzureManagedDiskReservedCapacity (Consider Azure Managed Disk reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider Red Hat reserved instance to save over your on-demand costs
+
+We analyzed your Red Hat usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - RedHatReservedCapacity (Consider Red Hat reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider RedHat Osa reserved instance to save over your on-demand costs
+
+We analyzed your RedHat Osa usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - RedHatOsaReservedCapacity (Consider RedHat Osa reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider SapHana reserved instance to save over your on-demand costs
+
+We analyzed your SapHana usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - SapHanaReservedCapacity (Consider SapHana reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider SuseLinux reserved instance to save over your on-demand costs
+
+We analyzed your SuseLinux usage over last 30 days and calculated reserved instance purchase that would maximize your savings. With reserved instance you can pre-purchase hourly usage and save over your current on-demand costs. Reserved instance is a billing benefit and will automatically apply to new or existing deployments. Saving estimates are calculated for individual subscriptions and the usage pattern observed over last 30 days. Shared scope recommendations are available in reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - SuseLinuxReservedCapacity (Consider SuseLinux reserved instance to save over your on-demand costs)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Consider VMware Cloud Simple reserved instance
+
+We analyzed your VMware Cloud Simple usage over the last 30 days and calculated the reserved instance purchase that would maximize your savings. With reserved instances, you can pre-purchase hourly usage and save over your current on-demand costs. A reserved instance is a billing benefit and automatically applies to new or existing deployments. Savings estimates are calculated per subscription from the usage pattern observed over the last 30 days. Shared scope recommendations are available in the reservation purchase experience and can increase savings further.
+
+Learn more about [Subscription - VMwareCloudSimpleReservedCapacity (Consider VMware Cloud Simple reserved instance)](/azure/cost-management-billing/reservations/reserved-instance-purchase-recommendations).
+
+### Use Virtual Machines with Ephemeral OS Disk enabled to save cost and get better performance
+
+With ephemeral OS disks, customers get these benefits: lower storage cost for the OS disk, lower read/write latency to the OS disk, and faster VM reimage operations that reset the OS (and temporary disk) to its original state. Ephemeral OS disks are best suited to short-lived IaaS VMs or VMs with stateless workloads.
+
+Learn more about [Subscription - EphemeralOsDisk (Use Virtual Machines with Ephemeral OS Disk enabled to save cost and get better performance)](/azure/virtual-machines/windows/ephemeral-os-disks).
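+
+As a hedged sketch (not the only way to enable the feature), an ephemeral OS disk is configured through the `diffDiskSettings` block of a VM's ARM template; ephemeral OS disks require read-only caching and a VM size whose cache or temp disk can hold the OS image:
+
```json
{
  "storageProfile": {
    "osDisk": {
      "createOption": "FromImage",
      "caching": "ReadOnly",
      "diffDiskSettings": {
        "option": "Local"
      }
    }
  }
}
```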
+
+## Synapse
+
+### Consider enabling autopause feature on Spark compute.
+
+Auto-pause releases and shuts down unused compute resources after a set period of inactivity.
+
+Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoPauseGuidance (Consider enabling autopause feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoPauseGuidance).
+
+### Consider enabling autoscale feature on Spark compute.
+
+Apache Spark for Azure Synapse Analytics pool's Autoscale feature automatically scales the number of nodes in a cluster instance up and down. During the creation of a new Apache Spark for Azure Synapse Analytics pool, a minimum and maximum number of nodes can be set when Autoscale is selected. Autoscale then monitors the resource requirements of the load and scales the number of nodes up or down. There's no additional charge for this feature.
+
+Learn more about [Synapse workspace - EnableSynapseSparkComputeAutoScaleGuidance (Consider enabling autoscale feature on spark compute.)](https://aka.ms/EnableSynapseSparkComputeAutoScaleGuidance).
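+
+The auto-pause and autoscale settings recommended above map to the `Microsoft.Synapse/workspaces/bigDataPools` resource. A minimal, hedged sketch of the relevant properties (node counts and delay are illustrative):
+
```json
{
  "properties": {
    "autoScale": {
      "enabled": true,
      "minNodeCount": 3,
      "maxNodeCount": 10
    },
    "autoPause": {
      "enabled": true,
      "delayInMinutes": 15
    }
  }
}
```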
+
+## Next steps
+
+Learn more about [Cost Optimization - Microsoft Azure Well Architected Framework](/azure/architecture/framework/cost/overview)
advisor Advisor Reference Operational Excellence Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-operational-excellence-recommendations.md
+
+ Title: Operational excellence recommendations
+description: Full list of available operational excellence recommendations in Advisor.
+ Last updated : 02/02/2022
+# Operational excellence recommendations
+
+Operational excellence recommendations in Azure Advisor can help you with:
+- Process and workflow efficiency.
+- Resource manageability.
+- Deployment best practices.
+
+You can get these recommendations on the **Operational Excellence** tab of the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Operational Excellence** tab.
+
+## Spring Cloud
+
+### Update your outdated Azure Spring Cloud SDK to the latest version
+
+We have identified API calls from an outdated Azure Spring Cloud SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+
+Learn more about [Spring Cloud Service - SpringCloudUpgradeOutdatedSDK (Update your outdated Azure Spring Cloud SDK to the latest version)](/azure/spring-cloud).
+
+### Update Azure Spring Cloud API Version
+
+We have identified API calls from outdated Azure Spring Cloud API for resources under this subscription. We recommend switching to the latest Spring Cloud API version. You need to update your existing code to use the latest API version. Also, you need to upgrade your Azure SDK and Azure CLI to the latest version. This ensures you receive the latest features and performance improvements.
+
+Learn more about [Spring Cloud Service - UpgradeAzureSpringCloudAPI (Update Azure Spring Cloud API Version)](/azure/spring-cloud).
+
+## Automation
+
+### Upgrade to Start/Stop VMs v2
+
+Start/Stop VMs v2 (preview) provides a decentralized, low-cost automation option for customers who want to optimize their VM costs. It offers all of the same functionality as the original version available with Azure Automation, but is designed to take advantage of newer technology in Azure.
+
+Learn more about [Automation account - SSV1_Upgrade (Upgrade to Start/Stop VMs v2)](https://aka.ms/startstopv2docs).
+
+## Batch
+
+### Recreate your pool to get the latest node agent features and fixes
+
+Your pool has an old node agent. Consider recreating your pool to get the latest node agent updates and bug fixes.
+
+Learn more about [Batch account - OldPool (Recreate your pool to get the latest node agent features and fixes)](https://aka.ms/batch_oldpool_learnmore).
+
+### Delete and recreate your pool to remove a deprecated internal component
+
+Your pool is using a deprecated internal component. Please delete and recreate your pool for improved stability and performance.
+
+Learn more about [Batch account - RecreatePool (Delete and recreate your pool to remove a deprecated internal component)](https://aka.ms/batch_deprecatedcomponent_learnmore).
+
+### Upgrade to the latest API version to ensure your Batch account remains operational.
+
+In the past 14 days, you have invoked a Batch management or service API version that is scheduled for deprecation. Upgrade to the latest API version to ensure your Batch account remains operational.
+
+Learn more about [Batch account - UpgradeAPI (Upgrade to the latest API version to ensure your Batch account remains operational.)](https://aka.ms/batch_deprecatedapi_learnmore).
+
+### Delete and recreate your pool using a VM size that will soon be retired
+
+Your pool is using A8-A11 VMs, which are set to be retired in March 2021. Please delete your pool and recreate it with a different VM size.
+
+Learn more about [Batch account - RemoveA8_A11Pools (Delete and recreate your pool using a VM size that will soon be retired)](https://aka.ms/batch_a8_a11_retirement_learnmore).
+
+### Recreate your pool with a new image
+
+Your pool is using an image with an imminent expiration date. Please recreate the pool with a new image to avoid potential interruptions. A list of newer images is available via the ListSupportedImages API.
+
+Learn more about [Batch account - EolImage (Recreate your pool with a new image)](https://aka.ms/batch_expiring_image_learn_more).
+
+## Cognitive Service
+
+### Upgrade to the latest version of the Immersive Reader SDK
+
+We have identified resources under this subscription using outdated versions of the Immersive Reader SDK. Using the latest version of the Immersive Reader SDK provides you with updated security, performance and an expanded set of features for customizing and enhancing your integration experience.
+
+Learn more about [Cognitive Service - ImmersiveReaderSDKRecommendation (Upgrade to the latest version of the Immersive Reader SDK)](https://aka.ms/ImmersiveReaderAzureAdvisorSDKLearnMore).
+
+## Compute
+
+### Increase the number of compute resources you can deploy by 10 vCPU
+
+If quota limits are exceeded, new VM deployments will be blocked until quota is increased. Increase your quota now to enable deployment of more resources.
+
+Learn more about [Virtual machine - IncreaseQuotaExperiment (Increase the number of compute resources you can deploy by 10 vCPU)](https://aka.ms/SubscriptionServiceLimits).
+
+### Add Azure Monitor to your virtual machine (VM) labeled as production
+
+Azure Monitor for VMs monitors your Azure virtual machines (VM) and virtual machine scale sets at scale. It analyzes the performance and health of your Windows and Linux VMs, and it monitors their processes and dependencies on other resources and external processes. It includes support for monitoring performance and application dependencies for VMs that are hosted on-premises or in another cloud provider.
+
+Learn more about [Virtual machine - AddMonitorProdVM (Add Azure Monitor to your virtual machine (VM) labeled as production)](/azure/azure-monitor/insights/vminsights-overview).
+
+### Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.
+
+Excessive NTP client traffic is caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers. This traffic can be viewed as malicious and blocked by the DDoS service in the Azure environment.
+
+Learn more about [Virtual machine - GetVmlistFortigateNtpIssue (Excessive NTP client traffic caused by frequent DNS lookups and NTP sync for new servers, which happens often on some global NTP servers.)](https://docs.fortinet.com/document/fortigate/6.2.3/fortios-release-notes/236526/known-issues).
+
+### An Azure environment update has been rolled out that may affect your Checkpoint Firewall.
+
+The image version of the Checkpoint firewall installed may have been affected by the recent Azure environment update. A kernel panic resulting in a reboot to factory defaults can occur in certain circumstances.
+
+Learn more about [Virtual machine - NvaCheckpointNicServicing (An Azure environment update has been rolled out that may affect your Checkpoint Firewall.)](https://supportcenter.checkpoint.com/supportcenter/portal).
+
+### The iControl REST interface has an unauthenticated remote command execution vulnerability.
+
+This vulnerability allows unauthenticated attackers with network access to the iControl REST interface, through the BIG-IP management interface and self IP addresses, to execute arbitrary system commands, create or delete files, and disable services. This vulnerability can only be exploited through the control plane and cannot be exploited through the data plane. Exploitation can lead to complete system compromise. The BIG-IP system in Appliance mode is also vulnerable.
+
+Learn more about [Virtual machine - GetF5vulnK03009991 (The iControl REST interface has an unauthenticated remote command execution vulnerability.)](https://support.f5.com/csp/article/K03009991).
+
+### NVA Accelerated Networking enabled but potentially not working.
+
+Desired state for Accelerated Networking is set to 'true' for one or more interfaces on this VM, but the actual state for accelerated networking is not enabled.
+
+Learn more about [Virtual machine - GetVmListANDisabled (NVA Accelerated Networking enabled but potentially not working.)](/azure/virtual-network/create-vm-accelerated-networking-cli).
+
+### Upgrade Citrix load balancers to avoid connectivity issues during NIC maintenance operations.
+
+We have identified that your virtual machine might be running a version of a software image whose Accelerated Networking (AN) drivers are not compatible with the Azure environment. The VM has a synthetic network interface that is AN capable but may disconnect during a maintenance or NIC operation. We recommend that you upgrade to the latest version of the image that addresses this issue. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - GetCitrixVFRevokeError (Upgrade Citrix load balancers to avoid connectivity issues during NIC maintenance operations.)](https://www.citrix.com/support/).
+
+## Kubernetes
+
+### Update cluster's service principal
+
+This cluster's service principal has expired, and the cluster will not be healthy until the service principal is updated.
+
+Learn more about [Kubernetes service - UpdateServicePrincipal (Update cluster's service principal)](/azure/aks/update-credentials).
+
+### Monitoring addon workspace is deleted
+
+The monitoring addon workspace has been deleted. Correct these issues to set up the monitoring addon.
+
+Learn more about [Kubernetes service - MonitoringAddonWorkspaceIsDeleted (Monitoring addon workspace is deleted)](https://aka.ms/aks-disable-monitoring-addon).
+
+### Deprecated Kubernetes API in 1.16 is found
+
+A deprecated Kubernetes API in 1.16 has been found. Avoid using deprecated APIs.
+
+Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn116IsFound (Deprecated Kubernetes API in 1.16 is found)](https://aka.ms/aks-deprecated-k8s-api-1.16).
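+
+For example, Deployments served from `extensions/v1beta1` stop working in 1.16; migrating to `apps/v1` mainly means updating `apiVersion` and adding the now-required `selector`. A hedged sketch with placeholder names:
+
```yaml
# Before (removed in 1.16): apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # placeholder name
spec:
  replicas: 2
  selector:             # required in apps/v1
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.21
```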
+
+### Enable the Cluster Autoscaler
+
+This cluster has not enabled the AKS Cluster Autoscaler, and it will not adapt to changing load conditions unless you have other ways to autoscale your cluster.
+
+Learn more about [Kubernetes service - EnableClusterAutoscaler (Enable the Cluster Autoscaler)](/azure/aks/cluster-autoscaler).
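+
+The autoscaler is enabled per node pool. As a hedged sketch, in an ARM template the agent pool profile of the managed cluster carries the autoscaler bounds (pool name, VM size, and counts are illustrative):
+
```json
{
  "agentPoolProfiles": [
    {
      "name": "nodepool1",
      "mode": "System",
      "vmSize": "Standard_DS2_v2",
      "enableAutoScaling": true,
      "minCount": 1,
      "maxCount": 5
    }
  ]
}
```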
+
+### The AKS node pool subnet is full
+
+Some of the subnets for this cluster's node pools are full and cannot take any more worker nodes. Using the Azure CNI plugin requires reserving IP addresses for each node and all of its pods at node provisioning time. If there is not enough IP address space in the subnet, no worker nodes can be deployed. Additionally, the AKS cluster cannot be upgraded if the node subnet is full.
+
+Learn more about [Kubernetes service - NodeSubnetIsFull (The AKS node pool subnet is full)](/azure/aks/use-multiple-node-pools#add-a-node-pool-with-a-unique-subnet-preview).
+
+### Disable the Application Routing Addon
+
+This cluster has Pod Security Policies enabled, which are going to be deprecated in favor of Azure Policy for AKS.
+
+Learn more about [Kubernetes service - UseAzurePolicyForKubernetes (Disable the Application Routing Addon)](/azure/aks/use-pod-security-on-azure-policy).
+
+### Use Ephemeral OS disk
+
+This cluster is not using ephemeral OS disks, which can provide lower read/write latency, along with faster node scaling and cluster upgrades.
+
+Learn more about [Kubernetes service - UseEphemeralOSdisk (Use Ephemeral OS disk)](/azure/aks/cluster-configuration#ephemeral-os).
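+
+As a hedged sketch, an AKS node pool opts into ephemeral OS disks through the `osDiskType` property of its agent pool profile; the chosen VM size must have a cache large enough to hold the OS disk (name and size are illustrative):
+
```json
{
  "name": "nodepool1",
  "vmSize": "Standard_DS3_v2",
  "osDiskType": "Ephemeral"
}
```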
+
+### Use Uptime SLA
+
+This cluster has not enabled Uptime SLA and is limited to an SLO of 99.5%.
+
+Learn more about [Kubernetes service - UseUptimeSLA (Use Uptime SLA)](/azure/aks/uptime-sla).
+
+### Deprecated Kubernetes API in 1.22 has been found
+
+Deprecated Kubernetes API in 1.22 has been found. Avoid using deprecated APIs.
+
+Learn more about [Kubernetes service - DeprecatedKubernetesAPIIn122IsFound (Deprecated Kubernetes API in 1.22 has been found)](https://aka.ms/aks-deprecated-k8s-api-1.22).
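+
+For example, the `networking.k8s.io/v1beta1` Ingress API is removed in 1.22; the `networking.k8s.io/v1` form requires `pathType` and restructures the backend fields. A hedged sketch with placeholder names:
+
```yaml
# Before (removed in 1.22): apiVersion: networking.k8s.io/v1beta1
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress       # placeholder name
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix       # now required
        backend:
          service:             # replaces serviceName/servicePort
            name: my-service
            port:
              number: 80
```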
+
+## Desktop Virtualization
+
+### Permissions missing for start VM on connect
+
+We have determined that you have enabled start VM on connect but didn't give Azure Virtual Desktop the rights to power manage the VMs in your subscription. As a result, your users connecting to host pools won't receive a remote desktop session. Review the feature documentation for requirements.
+
+Learn more about [Host Pool - AVDStartVMonConnect (Permissions missing for start VM on connect)](https://aka.ms/AVDStartVMRequirement).
+
+### No validation environment enabled
+
+We have determined that you do not have a validation environment enabled in the current subscription. When creating your host pools, you selected "No" for "Validation environment" on the properties tab. Having at least one host pool with a validation environment enabled ensures business continuity through Windows Virtual Desktop service deployments with early detection of potential issues.
+
+Learn more about [Host Pool - ValidationEnvHostPools (No validation environment enabled)](/azure/virtual-desktop/create-validation-host-pool).
+
+### Not enough production environments enabled
+
+We have determined that too many of your host pools have Validation Environment enabled. In order for Validation Environments to best serve their purpose, you should have at least one, but never more than half of your host pools in Validation Environment. By having a healthy balance between your host pools with Validation Environment enabled and those with it disabled, you will best be able to utilize the benefits of the multistage deployments that Windows Virtual Desktop offers with certain updates. To fix this issue, open your host pool's properties and select "No" next to the "Validation Environment" setting.
+
+Learn more about [Host Pool - ProductionEnvHostPools (Not enough production environments enabled)](/azure/virtual-desktop/create-host-pools-powershell).
+
+## Cosmos DB
+
+### Migrate Azure Cosmos DB attachments to Azure Blob Storage
+
+We noticed that your Azure Cosmos collection is using the legacy attachments feature. We recommend migrating attachments to Azure Blob Storage to improve the resiliency and scalability of your blob data.
+
+Learn more about [Cosmos DB account - CosmosDBAttachments (Migrate Azure Cosmos DB attachments to Azure Blob Storage)](/azure/cosmos-db/attachments#migrating-attachments-to-azure-blob-storage).
+
+### Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup
+
+Your Azure Cosmos DB accounts are configured with periodic backup. Continuous backup with point-in-time restore is now available on these accounts. With continuous backup, you can restore your data to any point in time within the past 30 days. Continuous backup may also be more cost-effective as a single copy of your data is retained.
+
+Learn more about [Cosmos DB account - CosmosDBMigrateToContinuousBackup (Improve resiliency by migrating your Azure Cosmos DB accounts to continuous backup)](/azure/cosmos-db/continuous-backup-restore-introduction).
+
+## Insights
+
+### Repair your log alert rule
+
+We have detected that one or more of your alert rules have invalid queries specified in their condition section. Log alert rules are created in Azure Monitor and are used to run analytics queries at specified intervals. The results of the query determine if an alert needs to be triggered. Analytics queries may become invalid over time due to changes in referenced resources, tables, or commands. We recommend that you correct the query in the alert rule to prevent it from getting auto-disabled and ensure monitoring coverage of your resources in Azure.
+
+Learn more about [Alert Rule - ScheduledQueryRulesLogAlert (Repair your log alert rule)](https://aka.ms/aa_logalerts_queryrepair).
+
+### Log alert rule was disabled
+
+The alert rule was disabled by Azure Monitor as it was causing service issues. To enable the alert rule, contact support.
+
+Learn more about [Alert Rule - ScheduledQueryRulesRp (Log alert rule was disabled)](https://aka.ms/aa_logalerts_queryrepair).
+
+## Key Vault
+
+### Create a backup of HSM
+
+Create a periodic HSM backup to prevent data loss and retain the ability to recover the HSM in case of a disaster.
+
+Learn more about [Managed HSM Service - CreateHSMBackup (Create a backup of HSM)](/azure/key-vault/managed-hsm/best-practices#backup).
+
+## Data Explorer
+
+### Reduce the cache policy on your Data Explorer tables
+
+Reduce the table cache policy to match the usage patterns (query lookback period).
+
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesOperationalExcellence (Reduce the cache policy on your Data Explorer tables)](https://aka.ms/adxcachepolicy).
+
+## Networking
+
+### Resolve Azure Key Vault issue for your Application Gateway
+
+We've detected that one or more of your Application Gateways has been misconfigured to obtain their listener certificate(s) from Key Vault, which may result in operational issues. You should fix this misconfiguration immediately to avoid operational issues for your Application Gateway.
+
+Learn more about [Application gateway - AppGwAdvisorRecommendationForKeyVaultErrors (Resolve Azure Key Vault issue for your Application Gateway)](https://aka.ms/agkverror).
+
+### Application Gateway does not have enough capacity to scale out
+
+We've detected that your Application Gateway subnet does not have enough capacity to allow scale-out during high-traffic conditions, which can cause downtime.
+
+Learn more about [Application gateway - AppgwRestrictedSubnetSpace (Application Gateway does not have enough capacity to scale out)](https://aka.ms/application-gateway-faq).
+
+### Enable Traffic Analytics to view insights into traffic patterns across Azure resources
+
+Traffic Analytics is a cloud-based solution that provides visibility into user and application activity in Azure. Traffic Analytics analyzes Network Watcher network security group (NSG) flow logs to provide insights into traffic flow. With Traffic Analytics, you can view top talkers across Azure and non-Azure deployments, investigate open ports, protocols, and malicious flows in your environment, and optimize your network deployment for performance. You can process flow logs at 10-minute and 60-minute processing intervals, giving you faster analytics on your traffic.
+
+Learn more about [Network Security Group - NSGFlowLogsenableTA (Enable Traffic Analytics to view insights into traffic patterns across Azure resources)](https://aka.ms/aa_enableta_learnmore).
+
+## SQL Virtual Machine
+
+### SQL IaaS Agent should be installed in full mode
+
+Full mode installs the SQL IaaS Agent to the VM to deliver full functionality. Use it for managing a SQL Server VM with a single instance. There is no cost associated with using the full manageability mode. System administrator permissions are required. Note that installing or upgrading to full mode is an online operation; no restart is required.
+
+Learn more about [SQL virtual machine - UpgradeToFullMode (SQL IaaS Agent should be installed in full mode)](/azure/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management?tabs=azure-powershell).
+
+## Storage
+
+### Prevent hitting subscription limit for maximum storage accounts
+
+A region can support a maximum of 250 storage accounts per subscription. You have either already reached or are about to reach that limit. If you reach that limit, you will be unable to create any more storage accounts in that subscription/region combination. Evaluate the recommended action to avoid hitting the limit.
+
+Learn more about [Storage Account - StorageAccountScaleTarget (Prevent hitting subscription limit for maximum storage accounts)](https://aka.ms/subscalelimit).
+
+### Update to newer releases of the Storage Java v12 SDK for better reliability.
+
+We noticed that one or more of your applications use an older version of the Azure Storage Java v12 SDK to write data to Azure Storage. Unfortunately, the version of the SDK being used has a critical issue that uploads incorrect data during retries (for example, in case of HTTP 500 errors), resulting in an invalid object being written. The issue is fixed in newer releases of the Java v12 SDK.
+
+Learn more about [Storage Account - UpdateStorageJavaSDK (Update to newer releases of the Storage Java v12 SDK for better reliability.)](/azure/developer/java/sdk/?view=azure-java-stable&preserve-view=true).
+
+## Subscription
+
+### Set up staging environments in Azure App Service
+
+Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped because of swap operations.
+
+Learn more about [Subscription - AzureApplicationService (Set up staging environments in Azure App Service)](/azure/app-service/deploy-staging-slots).
+
+### Enforce 'Add or replace a tag on resources' using Azure Policy
+
+Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy adds or replaces the specified tag and value when any resource is created or updated. Existing resources can be remediated by triggering a remediation task. This policy does not modify tags on resource groups.
+
+Learn more about [Subscription - AddTagPolicy (Enforce 'Add or replace a tag on resources' using Azure Policy)](/azure/governance/policy/overview).
+
+### Enforce 'Allowed locations' using Azure Policy
+
+Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy enables you to restrict the locations your organization can specify when deploying resources. Use to enforce your geo-compliance requirements.
+
+Learn more about [Subscription - AllowedLocationsPolicy (Enforce 'Allowed locations' using Azure Policy)](/azure/governance/policy/overview).
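+
+Conceptually, the 'Allowed locations' policy is a deny rule on the `location` field. A simplified, hedged sketch of the policy rule (the built-in definition adds further conditions, and the parameter name is illustrative):
+
```json
{
  "policyRule": {
    "if": {
      "not": {
        "field": "location",
        "in": "[parameters('listOfAllowedLocations')]"
      }
    },
    "then": {
      "effect": "deny"
    }
  }
}
```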
+
+### Enforce 'Audit VMs that do not use managed disks' using Azure Policy
+
+Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy audits VMs that do not use managed disks.
+
+Learn more about [Subscription - AuditForManagedDisksPolicy (Enforce 'Audit VMs that do not use managed disks' using Azure Policy)](/azure/governance/policy/overview).
+
+### Enforce 'Allowed virtual machine SKUs' using Azure Policy
+
+Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy enables you to specify a set of virtual machine SKUs that your organization can deploy.
+
+Learn more about [Subscription - AllowedVirtualMachineSkuPolicy (Enforce 'Allowed virtual machine SKUs' using Azure Policy)](/azure/governance/policy/overview).
+
+### Enforce 'Inherit a tag from the resource group' using Azure Policy
+
+Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce different rules and effects over your resources. This policy adds or replaces the specified tag and value from the parent resource group when any resource is created or updated. Existing resources can be remediated by triggering a remediation task.
+
+Learn more about [Subscription - InheritTagPolicy (Enforce 'Inherit a tag from the resource group' using Azure Policy)](/azure/governance/policy/overview).
+
+### Use Azure Lighthouse to simply and securely manage customer subscriptions at scale
+
+Using Azure Lighthouse improves security and reduces unnecessary access to your customer tenants by enabling more granular permissions for your users. It also allows for greater scalability, as your users can work across multiple customer subscriptions using a single login in your tenant.
+
+Learn more about [Subscription - OnboardCSPSubscriptionsToLighthouse (Use Azure Lighthouse to simply and securely manage customer subscriptions at scale)](/azure/lighthouse/concepts/cloud-solution-provider).
+
+## Web
+
+### Set up staging environments in Azure App Service
+
+Deploying an app to a slot first and swapping it into production makes sure that all instances of the slot are warmed up before being swapped into production. This eliminates downtime when you deploy your app. The traffic redirection is seamless, and no requests are dropped because of swap operations.
+
+Learn more about [App service - AzureAppService-StagingEnv (Set up staging environments in Azure App Service)](/azure/app-service/deploy-staging-slots).
+
+## Next steps
+
+Learn more about [Operational Excellence - Microsoft Azure Well Architected Framework](/azure/architecture/framework/devops/overview)
advisor Advisor Reference Performance Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-performance-recommendations.md
+
+ Title: Performance recommendations
+description: Full list of available performance recommendations in Advisor.
+ Last updated : 02/03/2022
+# Performance recommendations
+
+The performance recommendations in Azure Advisor can help improve the speed and responsiveness of your business-critical applications. You can get performance recommendations from Advisor on the **Performance** tab of the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Performance** tab.
+
+## Attestation
+
+### Update Attestation API Version
+
+We have identified API calls from outdated Attestation API for resources under this subscription. We recommend switching to the latest Attestation API versions. You need to update your existing code to use the latest API version. This ensures you receive the latest features and performance improvements.
+
+Learn more about [Attestation provider - UpgradeAttestationAPI (Update Attestation API Version)](/rest/api/attestation).
+
+## Azure VMware Solution
+
+### vSAN capacity utilization has crossed critical threshold
+
+Your vSAN capacity utilization has reached 75%. The cluster utilization is required to remain below the 75% critical threshold for SLA compliance. Add new nodes to the vSphere cluster to increase capacity, delete VMs to reduce consumption, or adjust VM workloads.
+
+Learn more about [AVS Private cloud - vSANCapacity (vSAN capacity utilization has crossed critical threshold)](/azure/azure-vmware/concepts-private-clouds-clusters).
+
+## Azure Cache for Redis
+
+### Improve your Cache and application performance when running with high network bandwidth
+
+Cache instances perform best when not running under high network bandwidth, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce network bandwidth, or scale to a different size or SKU with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheNetworkBandwidth (Improve your Cache and application performance when running with high network bandwidth)](https://aka.ms/redis/recommendations/bandwidth).
+
+### Improve your Cache and application performance when running with many connected clients
+
+Cache instances perform best when not running under high server load, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load, or scale to a different size or SKU with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheConnectedClients (Improve your Cache and application performance when running with many connected clients)](https://aka.ms/redis/recommendations/connections).
+
+### Improve your Cache and application performance when running with high server load
+
+Cache instances perform best when not running under high server load, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce the server load, or scale to a different size or SKU with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheServerLoad (Improve your Cache and application performance when running with high server load)](https://aka.ms/redis/recommendations/cpu).
+
+### Improve your Cache and application performance when running with high memory pressure
+
+Cache instances perform best when not running under high memory pressure, which may cause them to become unresponsive, experience data loss, or become unavailable. Apply best practices to reduce used memory, or scale to a different size or SKU with more capacity.
+
+Learn more about [Redis Cache Server - RedisCacheUsedMemory (Improve your Cache and application performance when running with high memory pressure)](https://aka.ms/redis/recommendations/memory).
+
+## Cognitive Service
+
+### Upgrade to the latest Cognitive Service Text Analytics API version
+
+Upgrade to the latest API version to get the best results in terms of model quality, performance, and service availability. Starting from v3.0, new features such as PII recognition, entity recognition, and entity linking are available as separate endpoints. Among the preview endpoints, the sentiment analysis endpoint adds opinion mining and the PII endpoint adds a redacted-text property.
+
+Learn more about [Cognitive Service - UpgradeToLatestAPI (Upgrade to the latest Cognitive Service Text Analytics API version)](/azure/cognitive-services/text-analytics/how-tos/text-analytics-how-to-call-api).
+
+### Upgrade to the latest API version of Azure Cognitive Service for Language
+
+Upgrade to the latest API version to get the best results in terms of model quality, performance and service availability.
+
+Learn more about [Cognitive Service - UpgradeToLatestAPILanguage (Upgrade to the latest API version of Azure Cognitive Service for Language)](https://aka.ms/language-api).
+
+### Upgrade to the latest Cognitive Service Text Analytics SDK version
+
+Upgrade to the latest SDK version to get the best results in terms of model quality, performance, and service availability. Starting from v3.0, new features such as PII recognition, entity recognition, and entity linking are available as separate endpoints. Among the preview endpoints, the sentiment analysis endpoint adds opinion mining and the PII endpoint adds a redacted-text property.
+
+Learn more about [Cognitive Service - UpgradeToLatestSDK (Upgrade to the latest Cognitive Service Text Analytics SDK version)](/azure/cognitive-services/text-analytics/quickstarts/text-analytics-sdk?tabs=version-3-1&pivots=programming-language-csharp).
+
+### Upgrade to the latest Cognitive Service Language SDK version
+
+Upgrade to the latest SDK version to get the best results in terms of model quality, performance and service availability.
+
+Learn more about [Cognitive Service - UpgradeToLatestSDKLanguage (Upgrade to the latest Cognitive Service Language SDK version)](https://aka.ms/language-api).
+
+## Communication services
+
+### Use recommended version of Chat SDK
+
+Azure Communication Services Chat SDK can be used to add rich, real-time chat to your applications. Update to the recommended version of Chat SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeChatSdk (Use recommended version of Chat SDK)](/azure/communication-services/concepts/chat/sdk-features).
+
+### Use recommended version of Resource Manager SDK
+
+Resource Manager SDK can be used to provision and manage Azure Communication Services resources. Update to the recommended version of Resource Manager SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeResourceManagerSdk (Use recommended version of Resource Manager SDK)](/azure/communication-services/quickstarts/create-communication-resource?tabs=windows&pivots=platform-net).
+
+### Use recommended version of Identity SDK
+
+Azure Communication Services Identity SDK can be used to manage identities, users, and access tokens. Update to the recommended version of Identity SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeIdentitySdk (Use recommended version of Identity SDK)](/azure/communication-services/concepts/sdk-options).
+
+### Use recommended version of SMS SDK
+
+Azure Communication Services SMS SDK can be used to send and receive SMS messages. Update to the recommended version of SMS SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeSmsSdk (Use recommended version of SMS SDK)](/azure/communication-services/concepts/telephony-sms/sdk-features).
+
+### Use recommended version of Phone Numbers SDK
+
+Azure Communication Services Phone Numbers SDK can be used to acquire and manage phone numbers. Update to the recommended version of Phone Numbers SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradePhoneNumbersSdk (Use recommended version of Phone Numbers SDK)](/azure/communication-services/concepts/sdk-options).
+
+### Use recommended version of Calling SDK
+
+Azure Communication Services Calling SDK can be used to enable voice, video, screen-sharing, and other real-time communication. Update to the recommended version of Calling SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeCallingSdk (Use recommended version of Calling SDK)](/azure/communication-services/concepts/voice-video-calling/calling-sdk-features).
+
+### Use recommended version of Call Automation SDK
+
+Azure Communication Services Call Automation SDK can be used to make and manage calls, play audio, and configure recording. Update to the recommended version of Call Automation SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeServerCallingSdk (Use recommended version of Call Automation SDK)](/azure/communication-services/concepts/voice-video-calling/call-automation-apis).
+
+### Use recommended version of Network Traversal SDK
+
+Azure Communication Services Network Traversal SDK can be used to access TURN servers for low-level data transport. Update to the recommended version of Network Traversal SDK to ensure the latest fixes and features.
+
+Learn more about [Communication service - UpgradeTurnSdk (Use recommended version of Network Traversal SDK)](/azure/communication-services/concepts/sdk-options).
+
+## Compute
+
+### Improve user experience and connectivity by deploying VMs closer to user's location.
+
+We have determined that your VMs are located in a region different from, or far from, where your users are connecting, using Windows Virtual Desktop (WVD). This can lead to prolonged connection response times and affects the overall user experience on WVD.
+
+Learn more about [Virtual machine - RegionProximitySessionHosts (Improve user experience and connectivity by deploying VMs closer to user's location.)](/azure/virtual-desktop/connection-latency).
+
+### Consider increasing the size of your NVA to address persistent high CPU
+
+When NVAs run at high CPU, packets can get dropped resulting in connection failures or high latency due to network retransmits. Your NVA is running at high CPU, so you should consider increasing the VM size as allowed by the NVA vendor's licensing requirements.
+
+Learn more about [Virtual machine - NVAHighCPU (Consider increasing the size of your NVA to address persistent high CPU)](https://aka.ms/NVAHighCPU).
+
+### Use Managed disks to prevent disk I/O throttling
+
+Your virtual machine disks belong to a storage account that has reached its scalability target, and is susceptible to I/O throttling. To protect your virtual machine from performance degradation and to simplify storage management, use Managed Disks.
+
+Learn more about [Virtual machine - ManagedDisksStorageAccount (Use Managed disks to prevent disk I/O throttling)](https://aka.ms/aa_avset_manageddisk_learnmore).
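+
+As a hedged sketch, converting a VM's unmanaged disks to managed disks with the Azure CLI involves deallocating the VM first (the resource names below are placeholders):
+
+```azurecli
+# Deallocate the VM, convert its disks to managed disks, then restart it (names are illustrative).
+az vm deallocate --resource-group myResourceGroup --name myVM
+az vm convert --resource-group myResourceGroup --name myVM
+az vm start --resource-group myResourceGroup --name myVM
+```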
+
+### Convert Managed Disks from Standard HDD to Premium SSD for performance
+
+We have noticed that your Standard HDD disk is approaching its performance targets. Azure Premium SSDs deliver high-performance, low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to a Premium SSD disk. Upgrading requires a VM reboot, which takes three to five minutes.
+
+Learn more about [Disk - MDHDDtoPremiumForPerformance (Convert Managed Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd).
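+
+As a hedged sketch, changing a managed disk's SKU with the Azure CLI might look like this; the disk must be unattached or its parent VM deallocated, and the resource names are placeholders:
+
+```azurecli
+# Deallocate the owning VM, move the disk to Premium SSD, then restart (names are illustrative).
+az vm deallocate --resource-group myResourceGroup --name myVM
+az disk update --resource-group myResourceGroup --name myDataDisk --sku Premium_LRS
+az vm start --resource-group myResourceGroup --name myVM
+```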
+
+### Enable Accelerated Networking to improve network performance and latency
+
+We have detected that Accelerated Networking is not enabled on VM resources in your existing deployment that may be capable of supporting this feature. If your VM OS image supports Accelerated Networking as detailed in the documentation, make sure to enable this free feature on these VMs to maximize the performance and latency of your networking workloads in the cloud.
+
+Learn more about [Virtual machine - AccelNetConfiguration (Enable Accelerated Networking to improve network performance and latency)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
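+
+Following the linked article, enabling Accelerated Networking on an existing VM's NIC can be sketched with the Azure CLI as follows (the resource names are placeholders; the VM must be deallocated first):
+
+```azurecli
+# Deallocate the VM, enable Accelerated Networking on its NIC, then restart (names are illustrative).
+az vm deallocate --resource-group myResourceGroup --name myVM
+az network nic update --resource-group myResourceGroup --name myNic --accelerated-networking true
+az vm start --resource-group myResourceGroup --name myVM
+```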
+
+### Use SSD Disks for your production workloads
+
+We noticed that you are using SSD disks together with Standard HDD disks on the same VM. Standard HDD managed disks are generally recommended for dev-test and backup; we recommend you use Premium SSDs or Standard SSDs for production. Premium SSDs deliver high-performance, low-latency disk support for virtual machines with IO-intensive workloads. Standard SSDs provide consistent performance and lower latency than HDDs. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which takes three to five minutes.
+
+Learn more about [Virtual machine - MixedDiskTypeToSSDPublic (Use SSD Disks for your production workloads)](/azure/virtual-machines/windows/disks-types#disk-comparison).
+
+### Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency.
+
+We have identified that your Virtual Machine might be running a version of the Barracuda Networks NextGen Firewall image that uses older drivers for Accelerated Networking, which may cause the product to revert to the standard synthetic network interface, which does not use Accelerated Networking. We recommend that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Barracuda Networks for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - BarracudaNVAAccelNet (Barracuda Networks NextGen Firewall may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.
+
+We have identified that your Virtual Machine might be running a version of the Arista Networks vEOS Router image that uses older drivers for Accelerated Networking, which may cause the product to revert to the standard synthetic network interface, which does not use Accelerated Networking. We recommend that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Arista Networks for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - AristaNVAAccelNet (Arista Networks vEOS Router may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.
+
+We have identified that your Virtual Machine might be running a version of the Cisco Cloud Services Router 1000V image that uses older drivers for Accelerated Networking, which may cause the product to revert to the standard synthetic network interface, which does not use Accelerated Networking. We recommend that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Cisco for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - CiscoCSRNVAAccelNet (Cisco Cloud Services Router 1000V may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.
+
+We have identified that your Virtual Machine might be running a version of the Palo Alto Networks VM-Series Firewall image that uses older drivers for Accelerated Networking, which may cause the product to revert to the standard synthetic network interface, which does not use Accelerated Networking. We recommend that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact Palo Alto Networks for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - PaloAltoNVAAccelNet (Palo Alto Networks VM-Series Firewall may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.
+
+We have identified that your Virtual Machine might be running a version of the NetApp Cloud Volumes ONTAP image that uses older drivers for Accelerated Networking, which may cause the product to revert to the standard synthetic network interface, which does not use Accelerated Networking. We recommend that you upgrade to a newer version of the image that addresses this issue and enable Accelerated Networking. Contact NetApp for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - NetAppNVAAccelNet (NetApp Cloud Volumes ONTAP may experience high CPU utilization, reduced throughput and high latency.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Match production Virtual Machines with Production Disk for consistent performance and better latency
+
+Production virtual machines need production disks to get the best performance. We see that you are running a production-level virtual machine, but you are using a low-performing Standard HDD disk. Upgrading the disks attached to your production virtual machines to either Standard SSD or Premium SSD gives you a more consistent experience and improvements in latency.
+
+Learn more about [Virtual machine - MatchProdVMProdDisks (Match production Virtual Machines with Production Disk for consistent performance and better latency)](/azure/virtual-machines/windows/disks-types#disk-comparison).
+
+### Update to the latest version of your Arista VEOS product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a software image version that uses older drivers for Accelerated Networking (AN). It has a synthetic network interface that either is not AN capable or is not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - AristaVeosANUpgradeRecommendation (Update to the latest version of your Arista VEOS product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a software image version that uses older drivers for Accelerated Networking (AN). It has a synthetic network interface that either is not AN capable or is not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - BarracudaNgANUpgradeRecommendation (Update to the latest version of your Barracuda NG Firewall product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a software image version that uses older drivers for Accelerated Networking (AN). It has a synthetic network interface that either is not AN capable or is not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - Cisco1000vANUpgradeRecommendation (Update to the latest version of your Cisco Cloud Services Router 1000V product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your F5 BigIp product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a software image version that uses older drivers for Accelerated Networking (AN). It has a synthetic network interface that either is not AN capable or is not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - F5BigIpANUpgradeRecommendation (Update to the latest version of your F5 BigIp product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your NetApp product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a software image version that uses older drivers for Accelerated Networking (AN). It has a synthetic network interface that either is not AN capable or is not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - NetAppANUpgradeRecommendation (Update to the latest version of your NetApp product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.
+
+We have identified that your Virtual Machine might be running a software image version that uses older drivers for Accelerated Networking (AN). It has a synthetic network interface that either is not AN capable or is not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance image.
+
+Learn more about [Virtual machine - PaloAltoFWANUpgradeRecommendation (Update to the latest version of your Palo Alto Firewall product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Update to the latest version of your Check Point product for Accelerated Networking support.
+
+We have identified that your Virtual Machine (VM) might be running a version of software image that is running older drivers for Accelerated Networking (AN). Your VM has a synthetic network interface that is either not AN capable or is not compatible with all Azure hardware. We recommend that you upgrade to the latest version of the image that addresses this issue and enable Accelerated Networking. Contact your vendor for further instructions on how to upgrade your Network Virtual Appliance Image.
+
+Learn more about [Virtual machine - CheckPointCGANUpgradeRecommendation (Update to the latest version of your Check Point product for Accelerated Networking support.)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
+
+### Accelerated Networking may require stopping and starting the VM
+
+We have detected that Accelerated Networking is not engaged on VM resources in your existing deployment even though the feature has been requested. In rare cases like this, it may be necessary to stop and start your VM, at your convenience, to re-engage AccelNet.
+
+Learn more about [Virtual machine - AccelNetDisengaged (Accelerated Networking may require stopping and starting the VM)](/azure/virtual-network/create-vm-accelerated-networking-cli#enable-accelerated-networking-on-existing-vms).
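+
+A stop (deallocate) and start cycle can be performed with the Azure CLI as in this sketch (the resource names are placeholders):
+
+```azurecli
+# Deallocate and restart the VM to re-engage Accelerated Networking (names are illustrative).
+az vm deallocate --resource-group myResourceGroup --name myVM
+az vm start --resource-group myResourceGroup --name myVM
+```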
+
+### NVA may see traffic loss due to hitting the maximum number of flows.
+
+Packet loss has been observed for this Virtual Machine because it is hitting or exceeding the maximum number of flows for a VM instance of this size on Azure.
+
+Learn more about [Virtual machine - NvaMaxFlowLimit (NVA may see traffic loss due to hitting the maximum number of flows.)](/azure/virtual-network/virtual-machine-network-throughput).
+
+### Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.
+
+Ultra disk is available in the same region as your database workload. Ultra disk offers high throughput, high IOPS, and consistent low latency disk storage for your database workloads: For Oracle DBs, you can now use either 4k or 512E sector sizes with Ultra disk depending on your Oracle DB version. For SQL server, leveraging Ultra disk for your log disk might offer more performance for your database. See instructions here for migrating your log disk to Ultra disk.
+
+Learn more about [Virtual machine - AzureStorageVmUltraDisk (Take advantage of Ultra Disk low latency for your log disks and improve your database workload performance.)](/azure/virtual-machines/disks-enable-ultra-ssd?tabs=azure-portal).
+
+## Kubernetes
+
+### Unsupported Kubernetes version is detected
+
+An unsupported Kubernetes version was detected. Ensure that your Kubernetes cluster runs a supported version.
+
+Learn more about [Kubernetes service - UnsupportedKubernetesVersionIsDetected (Unsupported Kubernetes version is detected)](https://aka.ms/aks-supported-versions).
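+
+As a hedged sketch, you can list the upgrade versions available to a cluster and then upgrade it with the Azure CLI (the cluster name, resource group, and version shown are placeholders):
+
+```azurecli
+# Show which Kubernetes versions this cluster can upgrade to (names are illustrative).
+az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster --output table
+
+# Upgrade the cluster to a supported version (the version number is a placeholder).
+az aks upgrade --resource-group myResourceGroup --name myAKSCluster --kubernetes-version 1.23.5
+```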
+
+## Data Factory
+
+### Review your throttled Data Factory Triggers
+
+A high volume of throttling has been detected in an event-based trigger that runs in your Data Factory resource. This is causing your pipeline runs to drop from the run queue. Review the trigger definition to resolve issues and increase performance.
+
+Learn more about [Data factory trigger - ADFThrottledTriggers (Review your throttled Data Factory Triggers)](https://aka.ms/adf-create-event-trigger).
+
+## MariaDB
+
+### Scale the storage limit for MariaDB server
+
+Our internal telemetry shows that the server may be constrained because it is approaching the limits of the currently provisioned storage. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning on the "Auto-Growth" feature for automatic storage increases.
+
+Learn more about [MariaDB server - OrcasMariaDbStorageLimit (Scale the storage limit for MariaDB server)](https://aka.ms/mariadbstoragelimits).
+
+### Increase the MariaDB server vCores
+
+Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+
+Learn more about [MariaDB server - OrcasMariaDbCpuOverlaod (Increase the MariaDB server vCores)](https://aka.ms/mariadbpricing).
+
+### Scale the MariaDB server to higher SKU
+
+Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connection requests, which adversely affects performance. To improve performance, we recommend moving to a higher-memory SKU by increasing vCores or switching to Memory Optimized SKUs.
+
+Learn more about [MariaDB server - OrcasMariaDbConcurrentConnection (Scale the MariaDB server to higher SKU)](https://aka.ms/mariadbconnectionlimits).
+
+### Move your MariaDB server to Memory Optimized SKU
+
+Our internal telemetry shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
+
+Learn more about [MariaDB server - OrcasMariaDbMemoryCache (Move your MariaDB server to Memory Optimized SKU)](https://aka.ms/mariadbpricing).
+
+### Increase the reliability of audit logs
+
+Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server is experiencing a CPU-heavy workload, or when the server generates a large number of audit logs over a short period of time. We recommend logging only the events required for your audit purposes, using the following server parameters: audit_log_events, audit_log_exclude_users, and audit_log_include_users. If the CPU usage on your server is high because of your workload, we recommend increasing the server's vCores to improve performance.
+
+Learn more about [MariaDB server - OrcasMariaDBAuditLog (Increase the reliability of audit logs)](https://aka.ms/mariadb-audit-logs).
+
+## MySQL
+
+### Scale the storage limit for MySQL server
+
+Our internal telemetry shows that the server may be constrained because it is approaching the limits of the currently provisioned storage. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning on the "Auto-Growth" feature for automatic storage increases.
+
+Learn more about [MySQL server - OrcasMySQLStorageLimit (Scale the storage limit for MySQL server)](https://aka.ms/mysqlstoragelimits).
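+
+As a hedged sketch, increasing provisioned storage on an Azure Database for MySQL single server with the Azure CLI might look like this (the server name, resource group, and size are placeholders; the size is specified in MB):
+
+```azurecli
+# Increase provisioned storage to 100 GB (names and size are illustrative).
+az mysql server update --resource-group myResourceGroup --name myServer --storage-size 102400
+```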
+
+### Scale the MySQL server to higher SKU
+
+Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connection requests, which adversely affects performance. To improve performance, we recommend moving to a higher-memory SKU by increasing vCores or switching to Memory Optimized SKUs.
+
+Learn more about [MySQL server - OrcasMySQLConcurrentConnection (Scale the MySQL server to higher SKU)](https://aka.ms/mysqlconnectionlimits).
+
+### Increase the MySQL server vCores
+
+Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+
+Learn more about [MySQL server - OrcasMySQLCpuOverload (Increase the MySQL server vCores)](https://aka.ms/mysqlpricing).
+
+### Move your MySQL server to Memory Optimized SKU
+
+Our internal telemetry shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
+
+Learn more about [MySQL server - OrcasMySQLMemoryCache (Move your MySQL server to Memory Optimized SKU)](https://aka.ms/mysqlpricing).
+
+### Add a MySQL Read Replica server
+
+Our internal telemetry shows that you may have a read-intensive workload running, which results in resource contention for this server. This may lead to slow query performance for the server. To improve performance, we recommend that you add a read replica and offload some of your read workloads to the replica.
+
+Learn more about [MySQL server - OrcasMySQLReadReplica (Add a MySQL Read Replica server)](https://aka.ms/mysqlreadreplica).
+
+### Improve MySQL connection management
+
+Our internal telemetry indicates that your application connecting to the MySQL server may not be managing connections efficiently. This may result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections. You can do this by configuring a server-side connection pooler, such as ProxySQL.
+
+Learn more about [MySQL server - OrcasMySQLConnectionPooling (Improve MySQL connection management)](https://aka.ms/azure_mysql_connection_pooling).
+
+### Increase the reliability of audit logs
+
+Our internal telemetry shows that the server's audit logs may have been lost over the past day. This can occur when your server is experiencing a CPU-heavy workload, or when the server generates a large number of audit logs over a short period of time. We recommend logging only the events required for your audit purposes, using the following server parameters: audit_log_events, audit_log_exclude_users, and audit_log_include_users. If the CPU usage on your server is high because of your workload, we recommend increasing the server's vCores to improve performance.
+
+Learn more about [MySQL server - OrcasMySQLAuditLog (Increase the reliability of audit logs)](https://aka.ms/mysql-audit-logs).
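+
+As a hedged sketch, the audit log server parameters named above can be set with the Azure CLI (the server and resource group names, and the chosen values, are placeholders):
+
+```azurecli
+# Log only connection events and exclude a service account (names and values are illustrative).
+az mysql server configuration set --resource-group myResourceGroup --server-name myServer \
+    --name audit_log_events --value CONNECTION
+az mysql server configuration set --resource-group myResourceGroup --server-name myServer \
+    --name audit_log_exclude_users --value azure_superuser
+```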
+
+### Improve performance by optimizing MySQL temporary-table sizing
+
+Our internal telemetry indicates that your MySQL server may be incurring unnecessary I/O overhead due to low temporary-table parameter settings. This may result in unnecessary disk-based transactions and reduced performance. We recommend that you increase the 'tmp_table_size' and 'max_heap_table_size' parameter values to reduce the number of disk-based transactions.
+
+Learn more about [MySQL server - OrcasMySqlTmpTables (Improve performance by optimizing MySQL temporary-table sizing)](https://aka.ms/azure_mysql_tmp_table).
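+
+As a hedged sketch, the two parameters can be raised with the Azure CLI (the names are placeholders; the 64-MB value is illustrative and should be tuned to your workload):
+
+```azurecli
+# Raise the in-memory temporary-table limits; both values are in bytes (illustrative).
+az mysql server configuration set --resource-group myResourceGroup --server-name myServer \
+    --name tmp_table_size --value 67108864
+az mysql server configuration set --resource-group myResourceGroup --server-name myServer \
+    --name max_heap_table_size --value 67108864
+```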
+
+### Improve MySQL connection latency
+
+Our internal telemetry indicates that your application connecting to MySQL server may not be managing connections efficiently. This may result in higher application latency. To improve connection latency, we recommend that you enable connection redirection. This can be done by enabling the connection redirection feature of the PHP driver.
+
+Learn more about [MySQL server - OrcasMySQLConnectionRedirection (Improve MySQL connection latency)](https://aka.ms/azure_mysql_connection_redirection).
+
+## PostgreSQL
+
+### Scale the storage limit for PostgreSQL server
+
+Our internal telemetry shows that the server may be constrained because it is approaching the limits of the currently provisioned storage. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount or turning on the "Auto-Growth" feature for automatic storage increases.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlStorageLimit (Scale the storage limit for PostgreSQL server)](https://aka.ms/postgresqlstoragelimits).
+
+### Increase the work_mem to avoid excessive disk spilling from sort and hash
+
+Our internal telemetry shows that the work_mem configuration is too small for your PostgreSQL server, which is resulting in disk spilling and degraded query performance. We recommend increasing the work_mem limit for the server, which reduces how often sorts and hashes spill to disk and improves overall query performance.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration).
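+
+As a hedged sketch (resource names are hypothetical, and the 32 MB value is illustrative; tune it against your actual sort/hash spill behavior):
+
+```shell
+# Illustrative only: raise work_mem (this parameter's value is in kilobytes).
+az postgres server configuration set \
+  --resource-group myrg --server-name myserver \
+  --name work_mem --value 32768
+```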
+
+### Distribute data in server group to distribute workload among nodes
+
+It looks like the data in this server group has not been distributed and remains on the coordinator node. To get the full benefit of Hyperscale (Citus), distribute the data to the worker nodes in this server group.
+
+Learn more about [Hyperscale (Citus) server group - OrcasPostgreSqlCitusDistributeData (Distribute data in server group to distribute workload among nodes)](https://go.microsoft.com/fwlink/?linkid=2135201).
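+
+As a hedged sketch, distribution is done with the Citus `create_distributed_table` function on the coordinator (the connection string, table, and distribution column below are hypothetical):
+
+```shell
+# Illustrative only: distribute an 'events' table by 'tenant_id'
+# so its shards are placed on the worker nodes.
+psql "host=mygroup-c.postgres.database.azure.com port=5432 dbname=citus user=citus sslmode=require" \
+  -c "SELECT create_distributed_table('events', 'tenant_id');"
+```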
+
+### Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly
+
+It looks like the data is not well balanced between the worker nodes in this Hyperscale (Citus) server group. To use each worker node of the server group effectively, rebalance the data in this server group.
+
+Learn more about [Hyperscale (Citus) server group - OrcasPostgreSqlCitusRebalanceData (Rebalance data in Hyperscale (Citus) server group to distribute workload among worker nodes more evenly)](https://go.microsoft.com/fwlink/?linkid=2148869).
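+
+As a hedged sketch, rebalancing is triggered with the Citus `rebalance_table_shards` function on the coordinator (connection details are hypothetical):
+
+```shell
+# Illustrative only: move shards between worker nodes until placement is even.
+psql "host=mygroup-c.postgres.database.azure.com port=5432 dbname=citus user=citus sslmode=require" \
+  -c "SELECT rebalance_table_shards();"
+```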
+
+### Scale the PostgreSQL server to higher SKU
+
+Our internal telemetry shows that the server may be unable to support the connection requests because of the maximum supported connections for the given SKU. This may result in a large number of failed connection requests, which adversely affects performance. To improve performance, we recommend moving to a higher-memory SKU by increasing vCores or switching to a Memory Optimized SKU.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlConcurrentConnection (Scale the PostgreSQL server to higher SKU)](https://aka.ms/postgresqlconnectionlimits).
+
+### Move your PostgreSQL server to Memory Optimized SKU
+
+Our internal telemetry shows that there is high churn in the buffer pool for this server, which can result in slower query performance and increased IOPS. To improve performance, review your workload queries to identify opportunities to minimize the memory consumed. If no such opportunity is found, we recommend moving to a higher SKU with more memory or increasing the storage size to get more IOPS.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlMemoryCache (Move your PostgreSQL server to Memory Optimized SKU)](https://aka.ms/postgresqlpricing).
+
+### Add a PostgreSQL Read Replica server
+
+Our internal telemetry shows that you may have a read-intensive workload running, which results in resource contention for this server. This may lead to slow query performance. To improve performance, we recommend that you add a read replica and offload some of your read workloads to it.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlReadReplica (Add a PostgreSQL Read Replica server)](https://aka.ms/postgresqlreadreplica).
+
+### Increase the PostgreSQL server vCores
+
+Our internal telemetry shows that the CPU has been running under high utilization for an extended period of time over the last 7 days. High CPU utilization may lead to slow query performance. To improve performance, we recommend moving to a larger compute size.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlCpuOverload (Increase the PostgreSQL server vCores)](https://aka.ms/postgresqlpricing).
+
+### Improve PostgreSQL connection management
+
+Our internal telemetry indicates that your PostgreSQL server may not be managing connections efficiently. This may result in unnecessary resource consumption and overall higher application latency. To improve connection management, we recommend that you reduce the number of short-lived connections and eliminate unnecessary idle connections. This can be done by configuring a server-side connection pooler, such as PgBouncer.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlConnectionPooling (Improve PostgreSQL connection management)](https://aka.ms/azure_postgresql_connection_pooling).
+
+### Improve PostgreSQL log performance
+
+Our internal telemetry indicates that your PostgreSQL server has been configured to output VERBOSE error logs. This can be useful for troubleshooting your database, but it can also reduce database performance. To improve performance, we recommend that you change the log_error_verbosity parameter to the DEFAULT setting.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlLogErrorVerbosity (Improve PostgreSQL log performance)](https://aka.ms/azure_postgresql_log_settings).
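+
+As a hedged sketch of the change with the Azure CLI (resource names are hypothetical):
+
+```shell
+# Illustrative only: return error-log verbosity to the default level.
+az postgres server configuration set \
+  --resource-group myrg --server-name myserver \
+  --name log_error_verbosity --value default
+```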
+
+### Optimize query statistics collection on an Azure Database for PostgreSQL
+
+Our internal telemetry indicates that your PostgreSQL server has been configured to track query statistics using the pg_stat_statements module. While useful for troubleshooting, it can also result in reduced server performance. To improve performance, we recommend that you change the pg_stat_statements.track parameter to NONE.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlStatStatementsTrack (Optimize query statistics collection on an Azure Database for PostgreSQL)](https://aka.ms/azure_postgresql_optimize_query_stats).
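+
+As a hedged sketch of the change with the Azure CLI (resource names are hypothetical); re-enable tracking when you need to troubleshoot again:
+
+```shell
+# Illustrative only: stop pg_stat_statements from tracking statements.
+az postgres server configuration set \
+  --resource-group myrg --server-name myserver \
+  --name pg_stat_statements.track --value NONE
+```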
+
+### Optimize query store on an Azure Database for PostgreSQL when not troubleshooting
+
+Our internal telemetry indicates that your PostgreSQL database has been configured to track query performance using the pg_qs.query_capture_mode parameter. While troubleshooting, we suggest setting the pg_qs.query_capture_mode parameter to TOP or ALL. When not troubleshooting, we recommend that you set the pg_qs.query_capture_mode parameter to NONE.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlQueryCaptureMode (Optimize query store on an Azure Database for PostgreSQL when not troubleshooting)](https://aka.ms/azure_postgresql_query_store).
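+
+As a hedged sketch of turning query capture off outside of troubleshooting sessions (resource names are hypothetical):
+
+```shell
+# Illustrative only: disable Query Store capture; set back to TOP or ALL
+# when you next need to troubleshoot query performance.
+az postgres server configuration set \
+  --resource-group myrg --server-name myserver \
+  --name pg_qs.query_capture_mode --value NONE
+```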
+
+### Increase the storage limit for PostgreSQL Flexible Server
+
+Our internal telemetry shows that the server may be constrained because it is approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned storage amount.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlFlexibleServerStorageLimit (Increase the storage limit for PostgreSQL Flexible Server)](https://aka.ms/azure_postgresql_flexible_server_limits).
+
+### Optimize logging settings by setting LoggingCollector to -1
+
+You can improve performance by setting LoggingCollector to -1.
+
+### Optimize logging settings by setting LogDuration to OFF
+
+You can improve performance by setting LogDuration to OFF.
+
+### Optimize logging settings by setting LogStatement to NONE
+
+You can improve performance by setting LogStatement to NONE.
+
+### Optimize logging settings by setting ReplaceParameter to OFF
+
+You can improve performance by setting ReplaceParameter to OFF.
+
+### Optimize logging settings by setting LoggingCollector to OFF
+
+You can improve performance by setting LoggingCollector to OFF.
+
+### Increase the storage limit for Hyperscale (Citus) server group
+
+Our internal telemetry shows that one or more nodes in the server group may be constrained because they are approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlCitusStorageLimitHyperscaleCitus (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes).
+
+### Optimize log_statement settings for PostgreSQL on Azure Database
+
+Our internal telemetry indicates that you have log_statement enabled. For better performance, set it to NONE.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogStatement (Optimize log_statement settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
+
+### Increase the work_mem to avoid excessive disk spilling from sort and hash
+
+Our internal telemetry shows that the work_mem configuration is too small for your PostgreSQL server, which is resulting in disk spilling and degraded query performance. We recommend increasing the work_mem limit for the server, which reduces how often sorts and hashes spill to disk and improves overall query performance.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruWorkMem (Increase the work_mem to avoid excessive disk spilling from sort and hash)](https://aka.ms/runtimeconfiguration).
+
+### Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning
+
+Our internal telemetry suggests that you can improve storage performance by enabling Intelligent tuning.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruIntelligentTuning (Improve PostgreSQL - Flexible Server performance by enabling Intelligent tuning)](/azure/postgresql/flexible-server/concepts-intelligent-tuning).
+
+### Optimize log_duration settings for PostgreSQL on Azure Database
+
+Our internal telemetry indicates that you have log_duration enabled. For better performance, set it to OFF.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogDuration (Optimize log_duration settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
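+
+As a hedged sketch of the change on Flexible Server with the Azure CLI (resource names are hypothetical):
+
+```shell
+# Illustrative only: stop logging the duration of every completed statement.
+az postgres flexible-server parameter set \
+  --resource-group myrg --server-name myserver \
+  --name log_duration --value off
+```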
+
+### Optimize log_min_duration settings for PostgreSQL on Azure Database
+
+Our internal telemetry indicates that you have log_min_duration enabled. For better performance, set it to -1.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogMinDuration (Optimize log_min_duration settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
+
+### Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database
+
+Our internal telemetry indicates that you have pg_qs.query_capture_mode enabled. For better performance, set it to NONE.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruQueryCaptureMode (Optimize pg_qs.query_capture_mode settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-query-store-best-practices).
+
+### Optimize PostgreSQL performance by enabling PGBouncer
+
+Our internal telemetry indicates that you can improve PostgreSQL performance by enabling PgBouncer.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruOrcasPostgreSQLConnectionPooling (Optimize PostgreSQL performance by enabling PGBouncer)](/azure/postgresql/flexible-server/concepts-pgbouncer).
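+
+As a hedged sketch of enabling the built-in pooler on Flexible Server with the Azure CLI (resource names are hypothetical):
+
+```shell
+# Illustrative only: turn on the built-in PgBouncer; pooled clients then
+# connect through PgBouncer's port rather than the direct PostgreSQL port.
+az postgres flexible-server parameter set \
+  --resource-group myrg --server-name myserver \
+  --name pgbouncer.enabled --value true
+```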
+
+### Optimize log_error_verbosity settings for PostgreSQL on Azure Database
+
+Our internal telemetry indicates that you have log_error_verbosity enabled. For better performance, set it to DEFAULT.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasMeruMeruLogErrorVerbosity (Optimize log_error_verbosity settings for PostgreSQL on Azure Database)](/azure/postgresql/flexible-server/concepts-logging).
+
+### Increase the storage limit for Hyperscale (Citus) server group
+
+Our internal telemetry shows that one or more nodes in the server group may be constrained because they are approaching limits for the currently provisioned storage values. This may result in degraded performance or in the server being moved to read-only mode. To ensure continued performance, we recommend increasing the provisioned disk space.
+
+Learn more about [Hyperscale (Citus) server group - MarlinStorageLimitRecommendation (Increase the storage limit for Hyperscale (Citus) server group)](/azure/postgresql/howto-hyperscale-scale-grow#increase-storage-on-nodes).
+
+### Migrate your database from SSPG to FSPG
+
+Consider our new offering, Azure Database for PostgreSQL Flexible Server, which provides richer capabilities such as zone-resilient HA, predictable performance, maximum control, custom maintenance windows, cost optimization controls, and a simplified developer experience.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlMeruMigration (Migrate your database from SSPG to FSPG)](https://aka.ms/sspg-upgrade).
+
+## Desktop Virtualization
+
+### Improve user experience and connectivity by deploying VMs closer to user's location.
+
+We have determined that your VMs are located in a region different from or far from where your users are connecting, using Windows Virtual Desktop (WVD). This may lead to prolonged connection response times and will impact the overall user experience on WVD. When creating VMs for your host pools, attempt to use a region closer to the users. Close proximity ensures continuing satisfaction with the WVD service and a better overall quality of experience.
+
+Learn more about [Host Pool - RegionProximityHostPools (Improve user experience and connectivity by deploying VMs closer to user's location.)](/azure/virtual-desktop/connection-latency).
+
+### Change the max session limit for your depth first load balanced host pool to improve VM performance
+
+Depth first load balancing uses the max session limit to determine the maximum number of users that can have concurrent sessions on a single session host. If the max session limit is too high, all user sessions will be directed to the same session host and this may cause performance and reliability issues. Therefore, when setting a host pool to have depth first load balancing, you should also set an appropriate max session limit according to the configuration of your deployment and capacity of your VMs. To fix this, open your host pool's properties and change the value next to the "Max session limit" setting.
+
+Learn more about [Host Pool - ChangeMaxSessionLimitForDepthFirstHostPool (Change the max session limit for your depth first load balanced host pool to improve VM performance)](/azure/virtual-desktop/configure-host-pool-load-balancing).
+
+## Cosmos DB
+
+### Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK
+
+We noticed that your Azure Cosmos DB applications are using Gateway mode via the Cosmos DB .NET or Java SDKs. We recommend switching to Direct connectivity for lower latency and higher scalability.
+
+Learn more about [Cosmos DB account - CosmosDBGatewayMode (Configure your Azure Cosmos DB applications to use Direct connectivity in the SDK)](/azure/cosmos-db/performance-tips#networking).
+
+### Configure your Azure Cosmos DB query page size (MaxItemCount) to -1
+
+You are using a query page size of 100 for queries in your Azure Cosmos DB container. We recommend using a page size of -1 for faster scans.
+
+Learn more about [Cosmos DB account - CosmosDBQueryPageSize (Configure your Azure Cosmos DB query page size (MaxItemCount) to -1)](/azure/cosmos-db/sql-api-query-metrics#max-item-count).
+
+### Add composite indexes to your Azure Cosmos DB container
+
+Your Azure Cosmos DB containers are running ORDER BY queries incurring high Request Unit (RU) charges. It is recommended to add composite indexes to your containers' indexing policy to improve the RU consumption and decrease the latency of these queries.
+
+Learn more about [Cosmos DB account - CosmosDBOrderByHighRUCharge (Add composite indexes to your Azure Cosmos DB container)](/azure/cosmos-db/index-policy#composite-indexes).
+
+### Optimize your Azure Cosmos DB indexing policy to only index what's needed
+
+Your Azure Cosmos DB containers are using the default indexing policy, which indexes every property in your documents. Because you're storing large documents, a high number of properties get indexed, resulting in high Request Unit consumption and poor write latency. To optimize write performance, we recommend overriding the default indexing policy to only index the properties used in your queries.
+
+Learn more about [Cosmos DB account - CosmosDBDefaultIndexingWithManyPaths (Optimize your Azure Cosmos DB indexing policy to only index what's needed)](/azure/cosmos-db/index-policy).
+
+### Use hierarchical partition keys for optimal data distribution
+
+This account has a custom setting that allows the logical partition size in a container to exceed the limit of 20 GB. This setting was applied by the Azure Cosmos DB team as a temporary measure to give you time to re-architect your application with a different partition key. It is not recommended as a long-term solution, as SLA guarantees are not honored when the limit is increased. You can now use hierarchical partition keys (preview) to re-architect your application. The feature allows you to exceed the 20 GB limit by setting up to three partition keys, ideal for multi-tenant scenarios or workloads that use synthetic keys.
+
+Learn more about [Cosmos DB account - CosmosDBHierarchicalPartitionKey (Use hierarchical partition keys for optimal data distribution)](https://devblogs.microsoft.com/cosmosdb/hierarchical-partition-keys-private-preview/).
+
+## HDInsight
+
+### Reads happen on most recent data
+
+More than 75% of your read requests are landing on the memstore. That indicates that the reads are primarily on recent data. This suggests that even if a flush happens on the memstore, the recent file needs to be accessed and that file needs to be in the cache.
+
+Learn more about [HDInsight cluster - HBaseMemstoreReadPercentage (Reads happen on most recent data)](/azure/hdinsight/hbase/apache-hbase-advisor).
+
+### Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.
+
+You are seeing this advisor recommendation because HDInsight team's system log shows that in the past 7 days, your cluster has encountered the following scenarios:
+ 1. High WAL sync time latency
+ 2. High write request count (at least 3 one hour windows of over 1000 avg_write_requests/second/node)
+
+These conditions indicate that your cluster is suffering from high write latencies. This could be due to a heavy workload performed on your cluster.
+To improve the performance of your cluster, consider the Accelerated Writes feature provided by Azure HDInsight HBase. The Accelerated Writes feature for HDInsight Apache HBase clusters attaches premium SSD managed disks to every RegionServer (worker node) instead of using cloud storage. As a result, it provides lower write latency and better resiliency for your applications.
+
+Learn more about [HDInsight cluster - AccWriteCandidate (Consider using Accelerated Writes feature in your HBase cluster to improve cluster performance.)](/azure/hdinsight/hbase/apache-hbase-accelerated-writes).
+
+### More than 75% of your queries are full scan queries.
+
+More than 75% of the scan queries on your cluster are doing a full region/table scan. Modify your scan queries to avoid full region or table scans.
+
+Learn more about [HDInsight cluster - ScanQueryTuningcandidate (More than 75% of your queries are full scan queries.)](/azure/hdinsight/hbase/apache-hbase-advisor).
+
+### Check your region counts as you have blocking updates.
+
+Region counts needs to be adjusted to avoid updates getting blocked. It might require a scale up of the cluster by adding new nodes.
+
+Learn more about [HDInsight cluster - RegionCountCandidate (Check your region counts as you have blocking updates.)](/azure/hdinsight/hbase/apache-hbase-advisor).
+
+### Consider increasing the flusher threads
+
+The flush queue size in your region servers is more than 100 or there are updates getting blocked frequently. Tuning of the flush handler is recommended.
+
+Learn more about [HDInsight cluster - FlushQueueCandidate (Consider increasing the flusher threads)](/azure/hdinsight/hbase/apache-hbase-advisor).
+
+### Consider increasing your compaction threads for compactions to complete faster
+
+The compaction queue in your region servers is more than 2000, suggesting that more data requires compaction. Slower compactions can impact read performance because there are more files to read. More files without compaction can also increase heap usage related to how files interact with the Azure file system.
+
+Learn more about [HDInsight cluster - CompactionQueueCandidate (Consider increasing your compaction threads for compactions to complete faster)](/azure/hdinsight/hbase/apache-hbase-advisor).
+
+## Key Vault
+
+### Update Key Vault SDK Version
+
+The new Key Vault client libraries are split into keys, secrets, and certificates SDKs, which are integrated with the recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. They also contain several performance fixes for issues reported by customers and proactively identified through our QA process.<br><br>**PLEASE DISMISS:**<br>Dismiss this recommendation if Key Vault is integrated with Azure Storage, Disk, or other Azure services that can use the old Key Vault SDK, and all your current custom applications use .NET SDK 4.0 or above.
+
+Learn more about [Key vault - UpgradeKeyVaultSDK (Update Key Vault SDK Version)](/azure/key-vault/general/client-libraries).
+
+### Update Key Vault SDK Version
+
+The new Key Vault client libraries are split into keys, secrets, and certificates SDKs, which are integrated with the recommended Azure Identity library to provide seamless authentication to Key Vault across all languages and environments. They also contain several performance fixes for issues reported by customers and proactively identified through our QA process.
+
+> [!IMPORTANT]
+> Please be aware that you can only remediate this recommendation for custom applications you have access to. The recommendation can also be shown because of integration with other Azure services, such as Storage and Disk encryption, which are in the process of updating to the new version of the SDK. If you use .NET SDK 4.0 in all your applications, dismiss this recommendation.
+
+Learn more about [Managed HSM Service - UpgradeKeyVaultMHSMSDK (Update Key Vault SDK Version)](/azure/key-vault/general/client-libraries).
+
+## Data Explorer
+
+### Right-size Data Explorer resources for optimal performance.
+
+This recommendation surfaces all Data Explorer resources which exceed the recommended data capacity (80%). The recommended action to improve the performance is to scale to the recommended configuration shown.
+
+Learn more about [Data explorer resource - Right-size ADX resource (Right-size Data Explorer resources for optimal performance.)](https://aka.ms/adxskuperformance).
+
+### Review table cache policies for Data Explorer tables
+
+This recommendation surfaces Data Explorer tables with a high number of queries that look back beyond the configured cache period (policy). (You'll see the top 10 tables by query percentage that access out-of-cache data.) To improve performance, limit queries on this table to the minimal necessary time range (within the defined policy). Alternatively, if data from the entire time range is required, increase the cache period to the recommended value.
+
+Learn more about [Data explorer resource - UpdateCachePoliciesForAdxTables (Review table cache policies for Data Explorer tables)](https://aka.ms/adxcachepolicy).
+
+### Reduce Data Explorer table cache policy for better performance
+
+Reducing the table cache policy will free up unused data from the resource's cache and improve performance.
+
+Learn more about [Data explorer resource - ReduceCacheForAzureDataExplorerTablesToImprovePerformance (Reduce Data Explorer table cache policy for better performance)](https://aka.ms/adxcachepolicy).
+
+## Networking
+
+### Configure DNS Time to Live to 20 seconds
+
+Time to Live (TTL) affects how recent a response a client gets when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client is routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 20 seconds to route traffic to a healthy endpoint as quickly as possible.
+
+Learn more about [Traffic Manager profile - FastFailOverTTL (Configure DNS Time to Live to 20 seconds)](https://aka.ms/Ngfw4r).
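+
+As a hedged sketch of the change with the Azure CLI (the resource group and profile name are hypothetical):
+
+```shell
+# Illustrative only: lower the profile's DNS TTL to 20 seconds
+# so clients re-resolve, and fail over, sooner.
+az network traffic-manager profile update \
+  --resource-group myrg --name myprofile --ttl 20
+```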
+
+### Configure DNS Time to Live to 60 seconds
+
+Time to Live (TTL) affects how recent a response a client gets when it makes a request to Azure Traffic Manager. Reducing the TTL value means that the client is routed to a functioning endpoint faster in the case of a failover. Configure your TTL to 60 seconds to route traffic to a healthy endpoint as quickly as possible.
+
+Learn more about [Traffic Manager profile - ProfileTTL (Configure DNS Time to Live to 60 seconds)](https://aka.ms/Um3xr5).
+
+### Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs
+
+You have been using over 90% of your procured circuit bandwidth recently. If you exceed your allocated bandwidth, you will experience an increase in dropped packets sent over ExpressRoute. Upgrade your circuit bandwidth to maintain performance if your bandwidth needs remain this high.
+
+Learn more about [ExpressRoute circuit - UpgradeERCircuitBandwidth (Upgrade your ExpressRoute circuit bandwidth to accommodate your bandwidth needs)](/azure/expressroute/about-upgrade-circuit-bandwidth).
+
+### Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use
+
+Under high traffic load, the VPN gateway may drop packets due to high CPU.
+
+Learn more about [Virtual network gateway - HighCPUVNetGateway (Consider increasing the size of your VNet Gateway SKU to address consistently high CPU use)](https://aka.ms/HighCPUP2SVNetGateway).
+
+### Consider increasing the size of your VNet Gateway SKU to address high P2S use
+
+Each gateway SKU can only support a specified count of concurrent P2S connections. Your connection count is close to your gateway limit, so additional connection attempts may fail.
+
+Learn more about [Virtual network gateway - HighP2SConnectionsVNetGateway (Consider increasing the size of your VNet Gateway SKU to address high P2S use)](https://aka.ms/HighP2SConnectionsVNetGateway).
+
+### Make sure you have enough instances in your Application Gateway to support your traffic
+
+Your Application Gateway has been running at high utilization recently, and under heavy load you may experience traffic loss or increased latency. It is important to scale your Application Gateway according to your traffic, with some buffer, so that you are prepared for traffic surges or spikes and can minimize their impact on your QoS. The Application Gateway v1 SKU (Standard/WAF) supports manual scaling, and the v2 SKU (Standard_v2/WAF_v2) supports both manual scaling and autoscaling. With manual scaling, increase your instance count; if autoscaling is enabled, make sure the maximum instance count is set to a higher value so Application Gateway can scale out as traffic increases.
+
+Learn more about [Application gateway - HotAppGateway (Make sure you have enough instances in your Application Gateway to support your traffic)](https://aka.ms/hotappgw).
+
+## SQL
+
+### Create statistics on table columns
+
+We have detected that you are missing table statistics which may be impacting query performance. The query optimizer uses statistics to estimate the cardinality or number of rows in the query result which enables the query optimizer to create a high quality query plan.
+
+Learn more about [SQL data warehouse - CreateTableStatisticsSqlDW (Create statistics on table columns)](https://aka.ms/learnmorestatistics).
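+
+As a hedged sketch of creating single-column statistics from the command line (the server, database, table, and column names below are hypothetical):
+
+```shell
+# Illustrative only: create statistics on a column that appears in
+# joins or WHERE clauses, via sqlcmd against a dedicated SQL pool.
+sqlcmd -S myserver.sql.azuresynapse.net -d mydw -U sqladmin -P "$SQL_PASSWORD" \
+  -Q "CREATE STATISTICS stats_order_date ON dbo.FactSales (order_date);"
+```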
+
+### Remove data skew to increase query performance
+
+We have detected distribution data skew greater than 15%. This can cause costly performance bottlenecks.
+
+Learn more about [SQL data warehouse - DataSkewSqlDW (Remove data skew to increase query performance)](https://aka.ms/learnmoredataskew).
+
+### Update statistics on table columns
+
+We have detected that you do not have up-to-date table statistics which may be impacting query performance. The query optimizer uses up-to-date statistics to estimate the cardinality or number of rows in the query result which enables the query optimizer to create a high quality query plan.
+
+Learn more about [SQL data warehouse - UpdateTableStatisticsSqlDW (Update statistics on table columns)](https://aka.ms/learnmorestatistics).
+
+### Right-size overutilized SQL Databases
+
+We've analyzed the DTU consumption of your SQL Database over the past 14 days and identified SQL Databases with high usage. You can improve your database performance by right-sizing to the recommended SKU based on the 95th percentile of your everyday workload.
+
+Learn more about [SQL database - sqlRightsizePerformance (Right-size overutilized SQL Databases)](https://aka.ms/SQLDBrecommendation).
+
+### Scale up to optimize cache utilization with SQL Data Warehouse
+
+We have detected that you had high cache used percentage with a low hit percentage. This indicates high cache eviction which can impact the performance of your workload.
+
+Learn more about [SQL data warehouse - SqlDwIncreaseCacheCapacity (Scale up to optimize cache utilization with SQL Data Warehouse)](https://aka.ms/learnmoreadaptivecache).
+
+### Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse
+
+We have detected that you had high tempdb utilization which can impact the performance of your workload.
+
+Learn more about [SQL data warehouse - SqlDwReduceTempdbContention (Scale up or update resource class to reduce tempdb contention with SQL Data Warehouse)](https://aka.ms/learnmoretempdb).
+
+### Convert tables to replicated tables with SQL Data Warehouse
+
+We have detected that you may benefit from using replicated tables. Replicated tables avoid costly data movement operations and can significantly increase the performance of your workload.
+
+Learn more about [SQL data warehouse - SqlDwReplicateTable (Convert tables to replicated tables with SQL Data Warehouse)](https://aka.ms/learnmorereplicatedtables).
+
+### Split staged files in the storage account to increase load performance
+
+We have detected that you can increase load throughput by splitting the compressed files that are staged in your storage account. A good rule of thumb is to split compressed files into 60 or more files to maximize the parallelism of your load.
+
+Learn more about [SQL data warehouse - FileSplittingGuidance (Split staged files in the storage account to increase load performance)](https://aka.ms/learnmorefilesplit).
+
+### Increase batch size when loading to maximize load throughput, data compression, and query performance
+
+We have detected that you can increase load performance and throughput by increasing the batch size when loading into your database. You should consider using the COPY statement. If you are unable to use the COPY statement, consider increasing the batch size when using loading utilities such as the SQLBulkCopy API or BCP; a good rule of thumb is a batch size between 100K and 1M rows.
+
+Learn more about [SQL data warehouse - LoadBatchSizeGuidance (Increase batch size when loading to maximize load throughput, data compression, and query performance)](https://aka.ms/learnmoreincreasebatchsize).
+
+### Co-locate the storage account within the same region to minimize latency when loading
+
+We have detected that you are loading from a region that is different from your SQL pool. You should consider loading from a storage account that is within the same region as your SQL pool to minimize latency when loading data.
+
+Learn more about [SQL data warehouse - ColocateStorageAccount (Co-locate the storage account within the same region to minimize latency when loading)](https://aka.ms/learnmorestoragecolocation).
+
+## Storage
+
+### Use "Put Blob" for blobs smaller than 256 MB
+
+When writing a block blob that is 256 MB or less (64 MB for requests using REST versions before 2016-05-31), you can upload it in its entirety with a single write operation using "Put Blob". Based on your aggregated metrics, we believe your storage account's write operations can be optimized.
+
+Learn more about [Storage Account - StorageCallPutBlob (Use "Put Blob" for blobs smaller than 256 MB)](https://aka.ms/understandblockblobs).
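As a sketch of the decision this recommendation describes (the function name is illustrative), the single-write path applies only up to the service's single-put limit:

```python
PUT_BLOB_LIMIT = 256 * 1024 * 1024  # bytes; 64 MB for REST versions before 2016-05-31

def upload_strategy(blob_size: int) -> str:
    """Choose 'Put Blob' (one write) for small block blobs, and a block
    list upload ('Put Block' + 'Put Block List') otherwise."""
    return "put_blob" if blob_size <= PUT_BLOB_LIMIT else "put_block_list"
```

The Azure Storage client libraries typically make this choice automatically based on a configurable single-put threshold, so explicit selection like this mainly matters when calling the REST API directly.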
+
+### Upgrade your Storage Client Library to the latest version for better reliability and performance
+
+The latest version of the Storage Client Library/SDK contains fixes to issues reported by customers and proactively identified through our QA process. The latest version also carries reliability and performance optimizations in addition to new features that can improve your overall experience using Azure Storage.
+
+Learn more about [Storage Account - UpdateStorageDataMovementSDK (Upgrade your Storage Client Library to the latest version for better reliability and performance)](https://aka.ms/AA5wtca).
+
+### Upgrade to Standard SSD Disks for consistent and improved performance
+
+Because you are running IaaS virtual machine workloads on Standard HDD managed disks, we wanted to let you know that a Standard SSD disk option is now available for all Azure VM types. Standard SSD disks are a cost-effective storage option optimized for enterprise workloads that need consistent performance. Upgrade your disk configuration today for improved latency, reliability, and availability. Upgrading requires a VM reboot, which will take three to five minutes.
+
+Learn more about [Storage Account - StandardSSDForNonPremVM (Upgrade to Standard SSD Disks for consistent and improved performance)](/azure/virtual-machines/windows/disks-types#standard-ssd).
+
+### Use premium performance block blob storage
+
+One or more of your storage accounts has a high transaction rate per GB of block blob data stored. Use premium performance block blob storage instead of standard performance storage for your workloads that require fast storage response times and/or high transaction rates and potentially save on storage costs.
+
+Learn more about [Storage Account - PremiumBlobStorageAccount (Use premium performance block blob storage)](https://aka.ms/usePremiumBlob).
+
+### Convert Unmanaged Disks from Standard HDD to Premium SSD for performance
+
+We have noticed your Unmanaged HDD Disk is approaching performance targets. Azure premium SSDs deliver high-performance and low-latency disk support for virtual machines with IO-intensive workloads. Give your disk performance a boost by upgrading your Standard HDD disk to Premium SSD disk. Upgrading requires a VM reboot, which will take three to five minutes.
+
+Learn more about [Storage Account - UMDHDDtoPremiumForPerformance (Convert Unmanaged Disks from Standard HDD to Premium SSD for performance)](/azure/virtual-machines/windows/disks-types#premium-ssd).
+
+### No Snapshots Detected
+
+We have observed that there are no snapshots of your file shares. This means you are not protected from accidental file deletion or file corruption. Please enable snapshots to protect your data; one way to do this is through Azure Backup.
+
+Learn more about [Storage Account - EnableSnapshots (No Snapshots Detected)](/azure/backup/azure-file-share-backup-overview).
+
+## Synapse
+
+### Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows
+
+Clustered columnstore tables organize data into segments. Having high segment quality is critical to achieving optimal query performance on a columnstore table. Segment quality can be measured by the number of rows in a compressed rowgroup.
+
+Learn more about [Synapse workspace - SynapseCCIGuidance (Tables with Clustered Columnstore Indexes (CCI) with less than 60 million rows)](https://aka.ms/AzureSynapseCCIGuidance).
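Segment quality can be estimated from catalog counts. The sketch below (names are illustrative) expresses average rowgroup fullness against the 1,048,576-row maximum of a compressed rowgroup:

```python
IDEAL_ROWGROUP_ROWS = 1_048_576  # maximum rows in a compressed rowgroup

def segment_quality(total_rows: int, rowgroup_count: int) -> float:
    """Average fullness of compressed rowgroups; 1.0 means every rowgroup
    is full, and low values indicate poor segment quality."""
    if rowgroup_count == 0:
        return 0.0
    return (total_rows / rowgroup_count) / IDEAL_ROWGROUP_ROWS
```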
+
+### Update SynapseManagementClient SDK Version
+
+The new SynapseManagementClient uses .NET SDK version 4.0 or above.
+
+Learn more about [Synapse workspace - UpgradeSynapseManagementClientSDK (Update SynapseManagementClient SDK Version)](https://aka.ms/UpgradeSynapseManagementClientSDK).
+
+## Web
+
+### Move your App Service Plan to PremiumV2 for better performance
+
+Your app served more than 1000 requests per day for the past 3 days. Your app may benefit from the higher performance infrastructure available with the Premium V2 App Service tier. The Premium V2 tier features Dv2-series VMs with faster processors, SSD storage, and doubled memory-to-core ratio when compared to the previous instances. Learn more about upgrading to Premium V2 from our documentation.
+
+Learn more about [App service - AppServiceMoveToPremiumV2 (Move your App Service Plan to PremiumV2 for better performance)](https://aka.ms/ant-premiumv2).
+
+### Check outbound connections from your App Service resource
+
+Your app has opened too many TCP/IP socket connections. Exceeding ephemeral TCP/IP port connection limits can cause unexpected connectivity issues for your apps.
+
+Learn more about [App service - AppServiceOutboundConnections (Check outbound connections from your App Service resource)](https://aka.ms/antbc-socket).
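One common cause of ephemeral port exhaustion is opening a new connection per request instead of reusing connections. A minimal pooling sketch (the class and counter are illustrative, not an App Service API):

```python
import http.client

class ConnectionPool:
    """Reuse one HTTP connection per host instead of opening a new
    socket per request, which can exhaust ephemeral TCP/IP ports."""

    def __init__(self) -> None:
        self._conns: dict[str, http.client.HTTPSConnection] = {}
        self.connections_created = 0

    def get(self, host: str) -> http.client.HTTPSConnection:
        if host not in self._conns:
            self._conns[host] = http.client.HTTPSConnection(host)
            self.connections_created += 1
        return self._conns[host]
```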
+
+## Next steps
+
+Learn more about [Performance Efficiency - Microsoft Azure Well Architected Framework](/azure/architecture/framework/scalability/overview)
advisor Advisor Reference Reliability Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-reference-reliability-recommendations.md
+
+ Title: Reliability recommendations
+description: Full list of available reliability recommendations in Advisor.
+ Last updated : 02/04/2022
+# Reliability recommendations
+
+Azure Advisor helps you ensure and improve the continuity of your business-critical applications. You can get reliability recommendations on the **Reliability** tab on the Advisor dashboard.
+
+1. Sign in to the [**Azure portal**](https://portal.azure.com).
+
+1. Search for and select [**Advisor**](https://aka.ms/azureadvisordashboard) from any page.
+
+1. On the **Advisor** dashboard, select the **Reliability** tab.
+
+## FarmBeats
+
+### Upgrade to the latest FarmBeats API version
+
+We have identified calls to a FarmBeats API version that is scheduled for deprecation. We recommend switching to the latest FarmBeats API version to ensure uninterrupted access to FarmBeats, latest features, and performance improvements.
+
+Learn more about [Azure FarmBeats - FarmBeatsApiVersion (Upgrade to the latest FarmBeats API version)](https://aka.ms/FarmBeatsPaaSAzureAdvisorFAQ).
+
+## API Management
+
+### Hostname certificate rotation failed
+
+The API Management service failed to refresh the hostname certificate from Key Vault. Ensure that the certificate exists in Key Vault and that the API Management service identity is granted secret read access. Otherwise, the API Management service will not be able to retrieve certificate updates from Key Vault, which may lead to the service using a stale certificate and runtime API traffic being blocked as a result.
+
+Learn more about [Api Management - HostnameCertRotationFail (Hostname certificate rotation failed)](https://aka.ms/apimdocs/customdomain).
+
+### SSL/TLS renegotiation blocked
+
+An SSL/TLS renegotiation attempt was blocked. Renegotiation happens when a client certificate is requested over an already established connection. When renegotiation is blocked, reading 'context.Request.Certificate' in policy expressions returns 'null'. To support client certificate authentication scenarios, enable 'Negotiate client certificate' on the listed hostnames. For browser-based clients, enabling this option might result in a certificate prompt being presented to the client.
+
+Learn more about [Api Management - TlsRenegotiationBlocked (SSL/TLS renegotiation blocked)](/azure/api-management/api-management-howto-mutual-certificates-for-clients).
+
+## Cache
+
+### Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.
+
+Fragmentation and memory pressure can cause availability incidents during a failover or management operations. Increasing the memory reserved for fragmentation helps reduce cache failures when running under high memory pressure. Memory for fragmentation can be increased via the maxfragmentationmemory-reserved setting, available on the advanced settings blade.
+
+Learn more about [Redis Cache Server - RedisCacheMemoryFragmentation (Availability may be impacted from high memory fragmentation. Increase fragmentation memory reservation to avoid potential impact.)](https://aka.ms/redis/recommendations/memory-policies).
+
+## Compute
+
+### Enable Backups on your Virtual Machines
+
+Enable backups for your virtual machines to secure your data.
+
+Learn more about [Virtual machine (classic) - EnableBackup (Enable Backups on your Virtual Machines)](/azure/backup/backup-overview).
+
+### Upgrade the standard disks attached to your premium-capable VM to premium disks
+
+We have identified that you are using standard disks with your premium-capable virtual machines, and we recommend that you consider upgrading the standard disks to premium disks. For any single-instance virtual machine using premium storage for all operating system and data disks, we guarantee virtual machine connectivity of at least 99.9%. Consider these factors when making your upgrade decision: first, upgrading requires a VM reboot, and this process takes 3-5 minutes to complete; second, if the VMs in the list are mission-critical production VMs, evaluate the improved availability against the cost of premium disks.
+
+Learn more about [Virtual machine - MigrateStandardStorageAccountToPremium (Upgrade the standard disks attached to your premium-capable VM to premium disks)](https://aka.ms/aa_storagestandardtopremium_learnmore).
+
+### Enable virtual machine replication to protect your applications from regional outage
+
+Virtual machines that do not have replication enabled to another region are not resilient to regional outages. Replicating the machines drastically reduces any adverse business impact during an Azure region outage. We highly recommend enabling replication for all the business-critical virtual machines in the list below so that, in the event of an outage, you can quickly bring up your machines in a remote Azure region.
+
+Learn more about [Virtual machine - ASRUnprotectedVMs (Enable virtual machine replication to protect your applications from regional outage)](https://aka.ms/azure-site-recovery-dr-azure-vms).
+
+### Upgrade VM from Premium Unmanaged Disks to Managed Disks at no additional cost
+
+We have identified that your VM is using premium unmanaged disks that can be migrated to managed disks at no additional cost. Azure Managed Disks provides higher resiliency, simplified service management, higher scale targets, and more choices among several disk types. This upgrade can be done through the portal in less than 5 minutes.
+
+Learn more about [Virtual machine - UpgradeVMToManagedDisksWithoutAdditionalCost (Upgrade VM from Premium Unmanaged Disks to Managed Disks at no additional cost)](https://aka.ms/md_overview).
+
+### Update your outbound connectivity protocol to Service Tags for Azure Site Recovery
+
+Using IP address-based filtering has been identified as a vulnerable way to control outbound connectivity for firewalls. We advise using service tags as an alternative for controlling connectivity. We highly recommend the use of service tags to allow connectivity to Azure Site Recovery services for the machines.
+
+Learn more about [Virtual machine - ASRUpdateOutboundConnectivityProtocolToServiceTags (Update your outbound connectivity protocol to Service Tags for Azure Site Recovery)](https://aka.ms/azure-site-recovery-using-service-tags).
+
+### Use Managed Disks to improve data reliability
+
+Virtual machines in an Availability Set with disks that share either storage accounts or storage scale units are not resilient to single storage scale unit failures during outages. Migrate to Azure Managed Disks to ensure that the disks of different VMs in the Availability Set are sufficiently isolated to avoid a single point of failure.
+
+Learn more about [Availability set - ManagedDisksAvSet (Use Managed Disks to improve data reliability)](https://aka.ms/aa_avset_manageddisk_learnmore).
+
+### Check Point Virtual Machine may lose Network Connectivity.
+
+We have identified that your virtual machine might be running a version of the Check Point image that is known to lose network connectivity during a platform servicing operation. We recommend that you upgrade to a newer version of the image that addresses this issue. Please contact Check Point for further instructions on how to upgrade your image.
+
+Learn more about [Virtual machine - CheckPointPlatformServicingKnownIssueA (Check Point Virtual Machine may lose Network Connectivity.)](https://supportcenter.checkpoint.com/supportcenter/portal?eventSubmit_doGoviewsolutiondetails=&solutionid=sk151752&partition=Advanced&product=CloudGuard).
+
+### Access to mandatory URLs missing for your Windows Virtual Desktop environment
+
+In order for a session host to deploy and register to WVD properly, you need to add a set of URLs to the allowed list if your virtual machine runs in a restricted environment. After visiting the "Learn more" link, you will see the minimum list of URLs you need to unblock to have a successful deployment and a functional session host. For specific URLs missing from the allowed list, you can also search the Application event log for event 3702.
+
+Learn more about [Virtual machine - SessionHostNeedsAssistanceForUrlCheck (Access to mandatory URLs missing for your Windows Virtual Desktop environment)](/azure/virtual-desktop/safe-url-list).
+
+## PostgreSQL
+
+### Improve PostgreSQL availability by removing inactive logical replication slots
+
+Our internal telemetry indicates that your PostgreSQL server may have inactive logical replication slots. This needs immediate attention: it can result in degraded server performance and unavailability due to WAL file retention and the buildup of snapshot files. To improve performance and availability, we strongly recommend that you immediately either delete the inactive replication slots or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and stays close to the current LSN of the server.
+
+Learn more about [PostgreSQL server - OrcasPostgreSqlLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_logical_decoding).
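As a sketch, inactive slots can be identified from the `pg_replication_slots` system view and removed with `pg_drop_replication_slot`; the Python helper below only filters rows already fetched by a driver (the helper name and row shape are illustrative):

```python
# SQL a DBA could run via psql or any PostgreSQL driver.
FIND_INACTIVE_LOGICAL_SLOTS = """
    SELECT slot_name
    FROM pg_replication_slots
    WHERE slot_type = 'logical' AND NOT active;
"""
DROP_SLOT = "SELECT pg_drop_replication_slot(%s);"

def inactive_logical_slots(rows: list[tuple[str, str, bool]]) -> list[str]:
    """Filter (slot_name, slot_type, active) rows down to the inactive
    logical slots that are candidates for removal."""
    return [name for name, slot_type, active in rows
            if slot_type == "logical" and not active]
```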
+
+### Improve PostgreSQL availability by removing inactive logical replication slots
+
+Our internal telemetry indicates that your PostgreSQL flexible server may have inactive logical replication slots. This needs immediate attention: it can result in degraded server performance and unavailability due to WAL file retention and the buildup of snapshot files. To improve performance and availability, we strongly recommend that you immediately either delete the inactive replication slots or start consuming the changes from these slots so that the slots' Log Sequence Number (LSN) advances and stays close to the current LSN of the server.
+
+Learn more about [Azure Database for PostgreSQL flexible server - OrcasPostgreSqlFlexibleServerLogicalReplicationSlots (Improve PostgreSQL availability by removing inactive logical replication slots)](https://aka.ms/azure_postgresql_flexible_server_logical_decoding).
+
+## IoT Hub
+
+### Upgrade device client SDK to a supported version for IotHub
+
+Some or all of your devices are using an outdated SDK, and we recommend that you upgrade to a supported version. See the details in the recommendation.
+
+Learn more about [IoT hub - UpgradeDeviceClientSdk (Upgrade device client SDK to a supported version for IotHub)](https://aka.ms/iothubsdk).
+
+## Cosmos DB
+
+### Configure Consistent indexing mode on your Azure Cosmos container
+
+We noticed that your Azure Cosmos container is configured with the Lazy indexing mode, which may impact the freshness of query results. We recommend switching to Consistent mode.
+
+Learn more about [Cosmos DB account - CosmosDBLazyIndexing (Configure Consistent indexing mode on your Azure Cosmos container)](/azure/cosmos-db/how-to-manage-indexing-policy).
+
+### Upgrade your old Azure Cosmos DB SDK to the latest version
+
+Your Azure Cosmos DB account is using an old version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+
+Learn more about [Cosmos DB account - CosmosDBUpgradeOldSDK (Upgrade your old Azure Cosmos DB SDK to the latest version)](/azure/cosmos-db/).
+
+### Upgrade your outdated Azure Cosmos DB SDK to the latest version
+
+Your Azure Cosmos DB account is using an outdated version of the SDK. We recommend upgrading to the latest version for the latest fixes, performance improvements, and new feature capabilities.
+
+Learn more about [Cosmos DB account - CosmosDBUpgradeOutdatedSDK (Upgrade your outdated Azure Cosmos DB SDK to the latest version)](/azure/cosmos-db/).
+
+### Configure your Azure Cosmos DB containers with a partition key
+
+Your Azure Cosmos DB non-partitioned collections are approaching their provisioned storage quota. Please migrate these collections to new collections with a partition key definition so that they can automatically be scaled out by the service.
+
+Learn more about [Cosmos DB account - CosmosDBFixedCollections (Configure your Azure Cosmos DB containers with a partition key)](/azure/cosmos-db/partitioning-overview#choose-partitionkey).
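When choosing a partition key for the new collections, a quick check is how evenly a candidate key spreads your items across logical partitions. A minimal sketch (the helper is illustrative, not a Cosmos DB API):

```python
from collections import Counter
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")

def partition_distribution(items: Iterable[T], key: Callable[[T], str]) -> Counter:
    """Count items per logical partition for a candidate partition key;
    a good key has high cardinality and an even spread."""
    return Counter(key(item) for item in items)
```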
+
+### Upgrade your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features
+
+Your Azure Cosmos DB API for MongoDB account is eligible to upgrade to version 4.0. Upgrading to v4.0 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.0.
+
+Learn more about [Cosmos DB account - CosmosDBMongoSelfServeUpgrade (Upgrade your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-version-upgrade).
+
+### Add a second region to your production workloads on Azure Cosmos DB
+
+Based on their names and configuration, we have detected the Azure Cosmos DB accounts below as being potentially used for production workloads. These accounts currently run in a single Azure region. You can increase their availability by configuring them to span at least two Azure regions.
+
+> [!NOTE]
+> Additional regions will incur extra costs.
+
+Learn more about [Cosmos DB account - CosmosDBSingleRegionProdAccounts (Add a second region to your production workloads on Azure Cosmos DB)](/azure/cosmos-db/high-availability).
+
+### Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account
+
+We observed your account is throwing a TooManyRequests error with the 16500 error code. Enabling Server Side Retry (SSR) can help mitigate this issue for you.
+
+Learn more about [Cosmos DB account - CosmosDBMongoServerSideRetries (Enable Server Side Retry (SSR) on your Azure Cosmos DB's API for MongoDB account)](/azure/cosmos-db/prevent-rate-limiting-errors).
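Server Side Retry handles retries inside the service; a client-side complement is retrying with backoff. The exception class and operation below are illustrative stand-ins, not the actual driver API:

```python
import time

class TooManyRequestsError(Exception):
    """Stand-in for the rate-limiting error (HTTP 429 / error code 16500)."""
    def __init__(self, retry_after: float = 0.0) -> None:
        self.retry_after = retry_after

def with_retries(op, max_attempts: int = 5):
    """Run op(), retrying rate-limited attempts with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return op()
        except TooManyRequestsError as err:
            if attempt == max_attempts - 1:
                raise
            # Honor a server-suggested delay if present, else back off.
            time.sleep(err.retry_after or 0.01 * 2 ** attempt)
```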
+
+### Migrate your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features
+
+Migrate your database account to a new database account to take advantage of Azure Cosmos DB's API for MongoDB v4.0. Upgrading to v4.0 can reduce your storage costs by up to 55% and your query costs by up to 45% by leveraging a new storage format. Numerous additional features such as multi-document transactions are also included in v4.0. When upgrading, you must also migrate the data in your existing account to a new account created using version 4.0. Azure Data Factory or Studio 3T can assist you in migrating your data.
+
+Learn more about [Cosmos DB account - CosmosDBMongoMigrationUpgrade (Migrate your Azure Cosmos DB API for MongoDB account to v4.0 to save on query/storage costs and utilize new features)](/azure/cosmos-db/mongodb-feature-support-40).
+
+### Your Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key
+
+It appears that your key vault's configuration is preventing your Cosmos DB account from contacting the key vault to access your managed encryption keys. If you've recently performed a key rotation, make sure that the previous key or key version remains enabled and available until Cosmos DB has completed the rotation. The previous key or key version can be disabled after 24 hours, or after the Azure Key Vault audit logs don't show activity from Azure Cosmos DB on that key or key version anymore.
+
+Learn more about [Cosmos DB account - CosmosDBKeyVaultWrap (Your Cosmos DB account is unable to access its linked Azure Key Vault hosting your encryption key)](/azure/cosmos-db/how-to-setup-cmk).
+
+### Avoid being rate limited from metadata operations
+
+We found a high number of metadata operations on your account. Your data in Cosmos DB, including metadata about your databases and collections, is distributed across partitions. Metadata operations have a system-reserved request unit (RU) limit. Avoid being rate limited for metadata operations by using static Cosmos DB client instances in your code and caching the names of databases and collections.
+
+Learn more about [Cosmos DB account - CosmosDBHighMetadataOperations (Avoid being rate limited from metadata operations)](/azure/cosmos-db/performance-tips).
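A sketch of the caching advice, where `read_container` is a hypothetical stand-in for an SDK metadata call (not an actual Cosmos DB API):

```python
class CosmosMetadataCache:
    """Cache container lookups so repeated metadata reads do not consume
    the system-reserved request unit (RU) budget."""

    def __init__(self, read_container) -> None:
        self._read = read_container        # hypothetical metadata call
        self._cache: dict[str, object] = {}
        self.metadata_calls = 0

    def get_container(self, name: str):
        if name not in self._cache:
            self.metadata_calls += 1
            self._cache[name] = self._read(name)
        return self._cache[name]
```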
+
+### Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB's API for MongoDB account
+
+We observed some of your applications are connecting to your upgraded Azure Cosmos DB's API for MongoDB account using the legacy 3.2 endpoint - [accountname].documents.azure.com. Use the new endpoint - [accountname].mongo.cosmos.azure.com (or its equivalent in sovereign, government, or restricted clouds).
+
+Learn more about [Cosmos DB account - CosmosDBMongoNudge36AwayFrom32 (Use the new 3.6+ endpoint to connect to your upgraded Azure Cosmos DB's API for MongoDB account)](/azure/cosmos-db/mongodb-feature-support-40).
+
+### Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated
+
+There is a critical bug in version 2.6.13 and lower of the Azure Cosmos DB Async Java SDK v2 that causes errors when a global logical sequence number (LSN) greater than the maximum integer value is reached. The service reaches this point transparently to you after a large volume of transactions occurs over the lifetime of an Azure Cosmos DB container. Note: this is a critical hotfix for the Async Java SDK v2; however, it is still highly recommended that you migrate to the [Java SDK v4](/azure/cosmos-db/sql/sql-api-sdk-java-v4).
+
+Learn more about [Cosmos DB account - CosmosDBMaxGlobalLSNReachedV2 (Upgrade to 2.6.14 version of the Async Java SDK v2 to avoid a critical issue or upgrade to Java SDK v4 as Async Java SDK v2 is being deprecated)](/azure/cosmos-db/sql/sql-api-sdk-async-java).
+
+### Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue
+
+There is a critical bug in version 4.15 and lower of the Azure Cosmos DB Java SDK v4 that causes errors when a global logical sequence number (LSN) greater than the maximum integer value is reached. The service reaches this point transparently to you after a large volume of transactions occurs over the lifetime of an Azure Cosmos DB container.
+
+Learn more about [Cosmos DB account - CosmosDBMaxGlobalLSNReachedV4 (Upgrade to the current recommended version of the Java SDK v4 to avoid a critical issue)](/azure/cosmos-db/sql/sql-api-sdk-java-v4).
+
+## Fluid Relay
+
+### Upgrade your Azure Fluid Relay client library
+
+You have recently invoked the Azure Fluid Relay service with an old client library. Upgrade your Azure Fluid Relay client library to the latest version to ensure your application remains operational. Upgrading provides the most up-to-date functionality, as well as enhancements in performance and stability. For more information on the latest version to use and how to upgrade, refer to the article.
+
+Learn more about [FluidRelay Server - UpgradeClientLibrary (Upgrade your Azure Fluid Relay client library)](https://github.com/microsoft/FluidFramework).
+
+## HDInsight
+
+### Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster
+
+Starting July 1, 2020, customers will not be able to create new Kafka clusters with Kafka 1.1 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft. Consider moving to Kafka 2.1 on HDInsight 4.0 by June 30, 2020 to avoid potential system/support interruption.
+
+Learn more about [HDInsight cluster - KafkaVersionRetirement (Deprecation of Kafka 1.1 in HDInsight 4.0 Kafka cluster)](https://aka.ms/hdiretirekafka).
+
+### Deprecation of Older Spark Versions in HDInsight Spark cluster
+
+Starting July 1, 2020, customers will not be able to create new Spark clusters with Spark 2.1 and 2.2 on HDInsight 3.6, and Spark 2.3 on HDInsight 4.0. Existing clusters will run as is without support from Microsoft.
+
+Learn more about [HDInsight cluster - SparkVersionRetirement (Deprecation of Older Spark Versions in HDInsight Spark cluster)](https://aka.ms/hdiretirespark).
+
+### Enable critical updates to be applied to your HDInsight clusters
+
+The HDInsight service is applying an important certificate-related update to your cluster. However, one or more policies in your subscription are preventing the HDInsight service from creating or modifying network resources (load balancer, network interface, and public IP address) associated with your clusters and applying this update. Please take action to allow the HDInsight service to create or modify these network resources before Jan 13, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 13, 2021 05:00 PM UTC and Jan 16, 2021 05:00 PM UTC. Failure to apply this update may result in your clusters becoming unhealthy and unusable.
+
+Learn more about [HDInsight cluster - GCSCertRotation (Enable critical updates to be applied to your HDInsight clusters)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+
+### Drop and recreate your HDInsight clusters to apply critical updates
+
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we are unable to apply the certificate updates on some of your clusters.
+
+Learn more about [HDInsight cluster - GCSCertRotationRound2 (Drop and recreate your HDInsight clusters to apply critical updates)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+
+### Drop and recreate your HDInsight clusters to apply critical updates
+
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, due to some custom configuration changes, we are unable to apply the certificate updates on some of your clusters. Please drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable.
+
+Learn more about [HDInsight cluster - GCSCertRotationR3DropRecreate (Drop and recreate your HDInsight clusters to apply critical updates)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+
+### Apply critical updates to your HDInsight clusters
+
+The HDInsight service has attempted to apply a critical certificate update on all your running clusters. However, one or more policies in your subscription are preventing HDInsight service from creating or modifying network resources (Load balancer, Network Interface and Public IP address) associated with your clusters and applying this update. Please remove or update your policy assignment to allow HDInsight service to create or modify network resources (Load balancer, Network interface and Public IP address) associated with your clusters before Jan 21, 2021 05:00 PM UTC. The HDInsight team will be performing updates between Jan 21, 2021 05:00 PM UTC and Jan 23, 2021 05:00 PM UTC. To verify the policy update, you can try to create network resources (Load balancer, Network interface and Public IP address) in the same resource group and Subnet where your cluster is in. Failure to apply this update may result in your clusters becoming unhealthy and unusable. You can also drop and recreate your cluster before Jan 25th, 2021 to prevent the cluster from becoming unhealthy and unusable. The HDInsight service will send another notification if we failed to apply the update to your clusters.
+
+Learn more about [HDInsight cluster - GCSCertRotationR3PlanPatch (Apply critical updates to your HDInsight clusters)](/azure/hdinsight/hdinsight-hadoop-provision-linux-clusters).
+
+### Action required: Migrate your A8-A11 HDInsight cluster before 1 March 2021
+
+You're receiving this notice because you have one or more active A8, A9, A10, or A11 HDInsight clusters. The A8-A11 virtual machines (VMs) will be retired in all regions on 1 March 2021. After that date, all clusters using A8-A11 will be deallocated. Migrate your affected clusters to another HDInsight-supported VM (https://azure.microsoft.com/pricing/details/hdinsight/) before that date. For more details, see the "Learn more" link or contact us at askhdinsight@microsoft.com.
+
+Learn more about [HDInsight cluster - VMDeprecation (Action required: Migrate your A8-A11 HDInsight cluster before 1 March 2021)](https://azure.microsoft.com/updates/a8-a11-azure-virtual-machine-sizes-will-be-retired-on-march-1-2021/).
+
+## Hybrid Compute
+
+### Upgrade to the latest version of the Azure Connected Machine agent
+
+The Azure Connected Machine agent is updated regularly with bug fixes, stability enhancements, and new functionality. Upgrade your agent to the latest version for the best Azure Arc experience.
+
+Learn more about [Machine - Azure Arc - ArcServerAgentVersion (Upgrade to the latest version of the Azure Connected Machine agent)](/azure/azure-arc/servers/manage-agent).
+
+## Kubernetes
+
+### Pod Disruption Budgets Recommended
+
+Using pod disruption budgets is recommended to improve the high availability of your services.
+
+Learn more about [Kubernetes service - PodDisruptionBudgetsRecommended (Pod Disruption Budgets Recommended)](https://aka.ms/aks-pdb).
+
+### Upgrade to the latest agent version of Azure Arc-enabled Kubernetes
+
+Upgrade to the latest agent version for the best Azure Arc-enabled Kubernetes experience, improved stability, and new functionality.
+
+Learn more about [Kubernetes - Azure Arc - Arc-enabled K8s agent version upgrade (Upgrade to the latest agent version of Azure Arc-enabled Kubernetes)](https://aka.ms/ArcK8sAgentUpgradeDocs).
+
+## Media Services
+
+### Increase Media Services quotas or limits to ensure continuity of service.
+
+Your Media Services account is about to reach its quota limits. Review the current usage of assets, content key policies, and streaming policies for the account. To avoid any disruption of service, request an increase for the quota limits of the entities that are close to the limit. You can request an increase by opening a support ticket and adding the relevant details to it. Don't create additional Media Services accounts in an attempt to obtain higher limits.
+
+Learn more about [Media Service - AccountQuotaLimit (Increase Media Services quotas or limits to ensure continuity of service.)](https://aka.ms/ams-quota-recommendation/).
+
+## Networking
+
+### Upgrade your SKU or add more instances to ensure fault tolerance
+
+Deploying two or more medium or large sized instances will ensure business continuity during outages caused by planned or unplanned maintenance.
+
+Learn more about [Application gateway - AppGateway (Upgrade your SKU or add more instances to ensure fault tolerance)](https://aka.ms/aa_gatewayrec_learnmore).
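+For example, assuming a v1 gateway named *myAppGateway* (a placeholder), a sketch of raising the instance count with the Azure CLI; verify the flags against your CLI version:
+
+```azurecli
+# Increase the application gateway to two instances for fault tolerance
+az network application-gateway update \
+  --resource-group myResourceGroup \
+  --name myAppGateway \
+  --capacity 2
+```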
+
+### Move to production gateway SKUs from Basic gateways
+
+The VPN gateway Basic SKU is designed for development and testing scenarios. If you're using the VPN gateway for production purposes, move to a production SKU. The production SKUs offer a higher number of tunnels, BGP support, active-active configuration, and custom IPsec/IKE policy, in addition to higher stability and availability.
+
+Learn more about [Virtual network gateway - BasicVPNGateway (Move to production gateway SKUs from Basic gateways)](https://aka.ms/aa_basicvpngateway_learnmore).
+
+### Add at least one more endpoint to the profile, preferably in another Azure region
+
+Profiles should have more than one endpoint to ensure availability if one of the endpoints fails. It is also recommended that endpoints be in different regions.
+
+Learn more about [Traffic Manager profile - GeneralProfile (Add at least one more endpoint to the profile, preferably in another Azure region)](https://aka.ms/AA1o0x4).
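+As a sketch, the following command adds a second Azure endpoint (a web app in another region) to an existing profile; the resource names and ID are placeholders:
+
+```azurecli
+az network traffic-manager endpoint create \
+  --resource-group myResourceGroup \
+  --profile-name myTMProfile \
+  --name mySecondaryEndpoint \
+  --type azureEndpoints \
+  --target-resource-id /subscriptions/<subscription-id>/resourceGroups/myResourceGroup2/providers/Microsoft.Web/sites/mySecondaryWebApp
+```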
+
+### Add an endpoint configured to "All (World)"
+
+For geographic routing, traffic is routed to endpoints based on defined regions. When a region fails, there is no pre-defined failover. Having an endpoint whose Regional Grouping is configured to "All (World)" for geographic profiles avoids traffic black-holing and guarantees that the service remains available.
+
+Learn more about [Traffic Manager profile - GeographicProfile (Add an endpoint configured to \"All (World)\")](https://aka.ms/Rf7vc5).
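+For geographic profiles, a hedged sketch of adding a fallback endpoint mapped to the `WORLD` region code (the resource names and ID are placeholders):
+
+```azurecli
+# Add a catch-all endpoint so traffic for unmapped or failed regions is not black-holed
+az network traffic-manager endpoint create \
+  --resource-group myResourceGroup \
+  --profile-name myGeographicProfile \
+  --name myFallbackEndpoint \
+  --type azureEndpoints \
+  --target-resource-id <endpoint-resource-id> \
+  --geo-mapping WORLD
+```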
+
+### Add or move one endpoint to another Azure region
+
+All endpoints associated to this proximity profile are in the same region. Users from other regions may experience long latency when attempting to connect. Adding or moving an endpoint to another region will improve overall performance for proximity routing and provide better availability if all endpoints in one region fail.
+
+Learn more about [Traffic Manager profile - ProximityProfile (Add or move one endpoint to another Azure region)](https://aka.ms/Ldkkdb).
+
+### Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency
+
+We have detected that your ExpressRoute gateway has only one ExpressRoute circuit associated with it. Connect one or more additional circuits to your gateway to ensure peering location redundancy and resiliency.
+
+Learn more about [Virtual network gateway - ExpressRouteGatewayRedundancy (Implement multiple ExpressRoute circuits in your Virtual Network for cross premises resiliency)](/azure/expressroute/designing-for-high-availability-with-expressroute).
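+As an illustrative sketch, assuming a second circuit is already provisioned, you can associate it with your existing ExpressRoute gateway through an additional connection (names and the circuit ID are placeholders):
+
+```azurecli
+az network vpn-connection create \
+  --resource-group myResourceGroup \
+  --name mySecondErConnection \
+  --vnet-gateway1 myErGateway \
+  --express-route-circuit2 <second-circuit-resource-id>
+```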
+
+### Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit
+
+We have detected that your ExpressRoute circuit is not currently being monitored by ExpressRoute Monitor on Network Performance Monitor. ExpressRoute Monitor provides end-to-end monitoring capabilities, including loss, latency, and performance from on-premises to Azure and from Azure to on-premises.
+
+Learn more about [ExpressRoute circuit - ExpressRouteGatewayE2EMonitoring (Implement ExpressRoute Monitor on Network Performance Monitor for end-to-end monitoring of your ExpressRoute circuit)](/azure/expressroute/how-to-npm).
+
+### Avoid hostname override to ensure site integrity
+
+Try to avoid overriding the hostname when configuring Application Gateway. Using a different domain on the front end of Application Gateway than the one used to access the backend can break cookies or redirect URLs. This might not apply in all situations, and certain categories of backends (like REST APIs) are generally less sensitive to it. Make sure the backend can handle the difference, or update the Application Gateway configuration so the hostname doesn't need to be overridden toward the backend. When used with App Service, attach a custom domain name to the web app and avoid using the *.azurewebsites.net host name toward the backend.
+
+Learn more about [Application gateway - AppGatewayHostOverride (Avoid hostname override to ensure site integrity)](https://aka.ms/appgw-advisor-usecustomdomain).
+
+### Use ExpressRoute Global Reach to improve your design for disaster recovery
+
+You appear to have ExpressRoute circuits peered in at least two different locations. Connect them to each other using ExpressRoute Global Reach to allow traffic to continue flowing between your on-premises network and Azure environments in the event of one circuit losing connectivity. You can establish Global Reach connections between circuits in different peering locations within the same metro or across metros.
+
+Learn more about [ExpressRoute circuit - UseGlobalReachForDR (Use ExpressRoute Global Reach to improve your design for disaster recovery)](/azure/expressroute/about-upgrade-circuit-bandwidth).
+
+### Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule
+
+In response to log4j2 vulnerability (CVE-2021-44228), Azure Web Application Firewall (WAF) RuleSet CRS 3.1/3.2 has been updated on your Application Gateway to help provide additional protection from this vulnerability. The rules are available under Rule 944240 and no action is needed to enable this.
+
+Learn more about [Application gateway - AppGwLog4JCVEPatchNotification (Azure WAF RuleSet CRS 3.1/3.2 has been updated with log4j2 vulnerability rule)](https://aka.ms/log4jcve).
+
+### Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228)
+
+To mitigate the impact of Log4j2 vulnerability, we recommend these steps:
+
+1) Upgrade Log4j2 to version 2.15.0 on your backend servers. If upgrade isn't possible, follow the system property guidance link below.
+2) Take advantage of WAF Core rule sets (CRS) by upgrading to the WAF SKU.
+
+Learn more about [Application gateway - AppGwLog4JCVEGenericNotification (Additional protection to mitigate Log4j2 vulnerability (CVE-2021-44228))](https://aka.ms/log4jcve).
+
+### Enable Active-Active gateways for redundancy
+
+In an active-active configuration, both instances of the VPN gateway establish S2S VPN tunnels to your on-premises VPN device. When planned maintenance or an unplanned event affects one gateway instance, traffic is switched over to the other active IPsec tunnel automatically.
+
+Learn more about [Virtual network gateway - VNetGatewayActiveActive (Enable Active-Active gateways for redundancy)](https://aka.ms/aa_vpnha_learnmore).
+
+## Recovery Services
+
+### Enable soft delete for your Recovery Services vaults
+
+Soft delete helps you retain your backup data in the Recovery Services vault for an additional duration after deletion, giving you an opportunity to retrieve it before it is permanently deleted.
+
+Learn more about [Recovery Services vault - AB-SoftDeleteRsv (Enable soft delete for your Recovery Services vaults)](/azure/backup/backup-azure-security-feature-cloud).
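+As a sketch, soft delete can be enabled on a vault with the Azure CLI (the vault and resource group names are placeholders):
+
+```azurecli
+az backup vault backup-properties set \
+  --resource-group myResourceGroup \
+  --name myRecoveryServicesVault \
+  --soft-delete-feature-state Enable
+```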
+
+### Enable Cross Region Restore for your Recovery Services vault
+
+Enable Cross Region Restore for your geo-redundant Recovery Services vaults so that you can restore your backup data in the secondary, Azure paired region.
+
+Learn more about [Recovery Services vault - Enable CRR (Enable Cross Region Restore for your recovery Services Vault)](/azure/backup/backup-azure-arm-restore-vms#cross-region-restore).
+
+## Search
+
+### You are close to exceeding storage quota of 2GB. Create a Standard search service.
+
+You are close to exceeding the storage quota of 2 GB. Create a Standard search service. Indexing operations stop working when the storage quota is exceeded.
+
+Learn more about [Search service - BasicServiceStorageQuota90percent (You are close to exceeding storage quota of 2GB. Create a Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).
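+For example, a minimal sketch of creating a Standard tier search service with the Azure CLI (the service name is a placeholder):
+
+```azurecli
+az search service create \
+  --name my-standard-search \
+  --resource-group myResourceGroup \
+  --sku Standard \
+  --partition-count 1 \
+  --replica-count 1
+```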
+
+### You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.
+
+You are close to exceeding the storage quota of 50 MB. Create a Basic or Standard search service. Indexing operations stop working when the storage quota is exceeded.
+
+Learn more about [Search service - FreeServiceStorageQuota90percent (You are close to exceeding storage quota of 50MB. Create a Basic or Standard search service.)](https://aka.ms/azs/search-limits-quotas-capacity).
+
+### You are close to exceeding your available storage quota. Add additional partitions if you need more storage.
+
+You are close to exceeding your available storage quota. Add additional partitions if you need more storage. After exceeding storage quota, you can still query, but indexing operations will no longer work.
+
+Learn more about [Search service - StandardServiceStorageQuota90percent (You are close to exceeding your available storage quota. Add additional partitions if you need more storage.)](https://aka.ms/azs/search-limits-quotas-capacity).
+
+## Storage
+
+### Enable Soft Delete to protect your blob data
+
+After enabling Soft Delete, deleted data transitions to a soft deleted state instead of being permanently deleted. When data is overwritten, a soft deleted snapshot is generated to save the state of the overwritten data. You can configure the amount of time soft deleted data is recoverable before it permanently expires.
+
+Learn more about [Storage Account - StorageSoftDelete (Enable Soft Delete to protect your blob data)](https://aka.ms/softdelete).
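+As a sketch, blob soft delete with a 14-day retention period can be enabled with the Azure CLI (the account name is a placeholder):
+
+```azurecli
+az storage blob service-properties delete-policy update \
+  --account-name mystorageaccount \
+  --enable true \
+  --days-retained 14
+```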
+
+### Use Managed Disks for storage accounts reaching capacity limit
+
+We have identified that you are using Premium SSD Unmanaged Disks in storage accounts that are about to reach the Premium Storage capacity limit. To avoid failures when the limit is reached, we recommend migrating to Managed Disks, which don't have an account capacity limit. You can do this migration through the portal in less than 5 minutes.
+
+Learn more about [Storage Account - StoragePremiumBlobQuotaLimit (Use Managed Disks for storage accounts reaching capacity limit)](https://aka.ms/premium_blob_quota).
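+If you prefer the CLI over the portal, a hedged sketch of converting a VM's unmanaged disks to managed disks (the VM must be deallocated first; names are placeholders):
+
+```azurecli
+# Stop and deallocate the VM before conversion
+az vm deallocate --resource-group myResourceGroup --name myVM
+
+# Convert the VM's unmanaged disks to managed disks
+az vm convert --resource-group myResourceGroup --name myVM
+
+# Restart the VM after conversion
+az vm start --resource-group myResourceGroup --name myVM
+```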
+
+## Web
+
+### Consider scaling out your App Service Plan to avoid CPU exhaustion
+
+Your app reached more than 90% CPU over the last couple of days. High CPU utilization can lead to runtime issues with your apps. To solve this, you can scale out your app.
+
+Learn more about [App service - AppServiceCPUExhaustion (Consider scaling out your App Service Plan to avoid CPU exhaustion)](https://aka.ms/antbc-cpu).
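+As a sketch, you can scale out an App Service plan to three instances with the Azure CLI (the plan name is a placeholder):
+
+```azurecli
+az appservice plan update \
+  --resource-group myResourceGroup \
+  --name myAppServicePlan \
+  --number-of-workers 3
+```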
+
+### Fix the backup database settings of your App Service resource
+
+Your app's backups are consistently failing due to an invalid database configuration. You can find more details in the backup history.
+
+Learn more about [App service - AppServiceFixBackupDatabaseSettings (Fix the backup database settings of your App Service resource)](https://aka.ms/antbc).
+
+### Consider scaling up your App Service Plan SKU to avoid memory exhaustion
+
+The App Service Plan containing your app reached >85% memory allocated. High memory consumption can lead to runtime issues with your apps. Investigate which app in the App Service Plan is exhausting memory and scale up to a higher plan with more memory resources if needed.
+
+Learn more about [App service - AppServiceMemoryExhaustion (Consider scaling up your App Service Plan SKU to avoid memory exhaustion)](https://aka.ms/antbc-memory).
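+As a sketch, you can scale up the plan to a larger SKU with the Azure CLI (the plan name and SKU are placeholders; pick a SKU with enough memory for your workload):
+
+```azurecli
+az appservice plan update \
+  --resource-group myResourceGroup \
+  --name myAppServicePlan \
+  --sku P1V2
+```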
+
+### Scale up your App Service resource to remove the quota limit
+
+Your app is part of a shared App Service plan and has met its quota multiple times. After meeting a quota, your web app can't accept incoming requests. To remove the quota, upgrade to a Standard plan.
+
+Learn more about [App service - AppServiceRemoveQuota (Scale up your App Service resource to remove the quota limit)](https://aka.ms/ant-asp).
+
+### Use deployment slots for your App Service resource
+
+You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
+
+Learn more about [App service - AppServiceUseDeploymentSlots (Use deployment slots for your App Service resource)](https://aka.ms/ant-staging).
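+As a sketch, you can create a staging slot and later swap it into production with the Azure CLI (names are placeholders; deployment slots require a Standard plan or higher):
+
+```azurecli
+# Create a staging slot for the web app
+az webapp deployment slot create --resource-group myResourceGroup --name myWebApp --slot staging
+
+# After validating the staging deployment, swap it into production
+az webapp deployment slot swap --resource-group myResourceGroup --name myWebApp --slot staging --target-slot production
+```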
+
+### Fix the backup storage settings of your App Service resource
+
+Your app's backups are consistently failing due to invalid storage settings. You can find more details in the backup history.
+
+Learn more about [App service - AppServiceFixBackupStorageSettings (Fix the backup storage settings of your App Service resource)](https://aka.ms/antbc).
+
+### Move your App Service resource to Standard or higher and use deployment slots
+
+You have deployed your application multiple times over the last week. Deployment slots help you manage changes and help you reduce deployment impact to your production web app.
+
+Learn more about [App service - AppServiceStandardOrHigher (Move your App Service resource to Standard or higher and use deployment slots)](https://aka.ms/ant-staging).
+
+### Consider scaling out your App Service Plan to optimize user experience and availability.
+
+Consider scaling out your App Service Plan to at least two instances to avoid cold start delays and service interruptions during routine maintenance.
+
+Learn more about [App Service plan - AppServiceNumberOfInstances (Consider scaling out your App Service Plan to optimize user experience and availability.)](https://aka.ms/appsvcnuminstances).
+
+### Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU.
+
+The combined bandwidth used by all the Free SKU Static Web Apps in this subscription is exceeding the monthly limit of 100 GB. Consider upgrading these apps to Standard SKU to avoid throttling.
+
+Learn more about [Static Web App - StaticWebAppsUpgradeToStandardSKU (Consider upgrading the hosting plan of the Static Web App(s) in this subscription to Standard SKU.)](https://azure.microsoft.com/pricing/details/app-service/static/).
+
+### Application code should be fixed as worker process crashed due to Unhandled Exception
+
+We identified a thread that resulted in an unhandled exception for your app. Fix your application code to prevent impact to application availability. A crash happens when an exception in your code goes unhandled and terminates the process.
+
+Learn more about [App service - AppServiceProactiveCrashMonitoring (Application code should be fixed as worker process crashed due to Unhandled Exception)](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html).
++
+## Next steps
+
+Learn more about [Reliability - Microsoft Azure Well Architected Framework](/azure/architecture/framework/resiliency/overview)
advisor Advisor Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/advisor/advisor-security-recommendations.md
For more information about security recommendations, see [Review your security r
To learn more about Advisor recommendations, see: * [Introduction to Advisor](advisor-overview.md) * [Get started with Advisor](advisor-get-started.md)
-* [Advisor cost recommendations](advisor-cost-recommendations.md)
-* [Advisor performance recommendations](advisor-performance-recommendations.md)
-* [Advisor reliability recommendations](advisor-high-availability-recommendations.md)
-* [Advisor operational excellence recommendations](advisor-operational-excellence-recommendations.md)
+* [Advisor cost recommendations](advisor-reference-cost-recommendations.md)
+* [Advisor performance recommendations](advisor-reference-performance-recommendations.md)
+* [Advisor reliability recommendations](advisor-reference-reliability-recommendations.md)
+* [Advisor operational excellence recommendations](advisor-reference-operational-excellence-recommendations.md)
* [Advisor REST API](/rest/api/advisor/)
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Title: Rotate certificates in Azure Kubernetes Service (AKS)
description: Learn how to rotate your certificates in an Azure Kubernetes Service (AKS) cluster. Previously updated : 1/9/2022 Last updated : 3/3/2022 # Rotate certificates in Azure Kubernetes Service (AKS)
az vmss run-command invoke -g MC_rg_myAKSCluster_region -n vmss-name --instance-
Azure Kubernetes Service will automatically rotate non-CA certificates on both the control plane and agent nodes before they expire, with no downtime for the cluster.
-For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/). TLS Bootstrapping is currently available in the following regions:
-
-* eastus2euap
-* centraluseuap
-* westcentralus
-* uksouth
-* eastus
-* australiacentral
-* australiaest
+For AKS to automatically rotate non-CA certificates, the cluster must have [TLS Bootstrapping](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-tls-bootstrapping/).
#### How to check whether the current agent node pool is TLS Bootstrapping enabled? To verify whether TLS Bootstrapping is enabled on your cluster, browse to the following paths. On a Linux node: /var/lib/kubelet/bootstrap-kubeconfig. On a Windows node, it's c:\k\bootstrap-config.
aks Custom Node Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/custom-node-configuration.md
The settings below can be used to tune the operation of the virtual memory (VM)
| `transparentHugePageEnabled` | `always`, `madvise`, `never` | `always` | [Transparent Hugepages](https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge) is a Linux kernel feature intended to improve performance by making more efficient use of your processor's memory-mapping hardware. When enabled the kernel attempts to allocate `hugepages` whenever possible and any Linux process will receive 2-MB pages if the `mmap` region is 2 MB naturally aligned. In certain cases when `hugepages` are enabled system wide, applications may end up allocating more memory resources. An application may `mmap` a large region but only touch 1 byte of it, in that case a 2-MB page might be allocated instead of a 4k page for no good reason. This scenario is why it's possible to disable `hugepages` system-wide or to only have them inside `MADV_HUGEPAGE madvise` regions. | | `transparentHugePageDefrag` | `always`, `defer`, `defer+madvise`, `madvise`, `never` | `madvise` | This value controls whether the kernel should make aggressive use of memory compaction to make more `hugepages` available. | ++ > [!IMPORTANT] > For ease of search and readability the OS settings are displayed in this document by their name but should be added to the configuration json file or AKS API using [camelCase capitalization convention](/dotnet/standard/design-guidelines/capitalization-conventions).
Add a new node pool specifying the Kubelet parameters using the JSON file you cr
az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --kubelet-config ./kubeletconfig.json ``` +
+## Other configuration
+
+The settings below can be used to modify other Operating System settings.
+
+### Message of the Day
+
+Pass the `--message-of-the-day` flag with the location of the file to replace the Message of the Day on Linux nodes at cluster creation or node pool creation.
++
+#### Cluster creation
+```azurecli
+az aks create --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt
+```
+
+#### Nodepool creation
+```azurecli
+az aks nodepool add --name mynodepool1 --cluster-name myAKSCluster --resource-group myResourceGroup --message-of-the-day ./newMOTD.txt
+```
+++ ## Next steps - Learn [how to configure your AKS cluster](cluster-configuration.md).
aks Open Service Mesh Deploy Addon Az Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-az-cli.md
Title: Install the Open Service Mesh (OSM) add-on using Azure CLI
-description: Install Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on using Azure CLI
+ Title: Install the Open Service Mesh add-on by using the Azure CLI
+description: Use Azure CLI commands to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster.
Last updated 11/10/2021
-# Install the Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on using Azure CLI
+# Install the Open Service Mesh add-on by using the Azure CLI
-This article shows you how to install the OSM add-on on an AKS cluster and verify it is installed and running.
+This article shows you how to install the Open Service Mesh (OSM) add-on on an Azure Kubernetes Service (AKS) cluster and verify that it's installed and running.
> [!IMPORTANT] > The OSM add-on installs version *1.0.0* of OSM on your cluster.
This article shows you how to install the OSM add-on on an AKS cluster and verif
* An Azure subscription. If you don't have an Azure subscription, you can create a [free account](https://azure.microsoft.com/free). * [Azure CLI installed](/cli/azure/install-azure-cli).
-## Install the OSM AKS add-on on your cluster
+## Install the OSM add-on on your cluster
-To install the OSM AKS add-on, use `--enable-addons open-service-mesh` when creating or updating a cluster.
+To install the OSM add-on, use `--enable-addons open-service-mesh` when creating or updating a cluster.
-The following example creates a *myResourceGroup* resource group. Then creates a *myAKSCluster* cluster with a three nodes and the OSM add-on.
+The following example creates a *myResourceGroup* resource group. Then it creates a *myAKSCluster* cluster with three nodes and the OSM add-on.
```azurecli-interactive az group create --name myResourceGroup --location eastus
az aks create \
--enable-addons open-service-mesh ```
-For existing clusters, use `az aks enable-addons`. For example:
+For existing clusters, use `az aks enable-addons`. The following code shows an example.
> [!IMPORTANT] > You can't enable the OSM add-on on an existing cluster if an OSM mesh is already on your cluster. Uninstall any existing OSM meshes on your cluster before enabling the OSM add-on.
az aks enable-addons \
## Get the credentials for your cluster
-Get the credentials for your AKS cluster using the `az aks get-credentials` command. The following example command gets the credentials for the *myAKSCluster* in the *myResourceGroup* resource group.
+Get the credentials for your AKS cluster by using the `az aks get-credentials` command. The following example command gets the credentials for *myAKSCluster* in the *myResourceGroup* resource group:
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster ```
-## Verify the OSM add-on is installed on your cluster
+## Verify that the OSM add-on is installed on your cluster
-To see if the OSM add-on is enabled on your cluster, verify the *enabled* value shows a *true* for *openServiceMesh* under *addonProfiles*. The following example shows the status of the OSM add-on for the *myAKSCluster* in *myResourceGroup*.
+To see if the OSM add-on is installed on your cluster, verify that the `enabled` value is `true` for `openServiceMesh` under `addonProfiles`. The following example shows the status of the OSM add-on for *myAKSCluster* in *myResourceGroup*:
```azurecli-interactive az aks show --resource-group myResourceGroup --name myAKSCluster --query 'addonProfiles.openServiceMesh.enabled' ```
-## Verify the OSM mesh is running on your cluster
+## Verify that the OSM mesh is running on your cluster
-In addition to verifying the OSM add-on has been enabled on your cluster, you can also verify the version, status, and configuration of the OSM mesh running on your cluster.
-
-To verify the version of the OSM mesh running on your cluster, use `kubectl` to display the image version of the *osm-controller* deployment. For example:
+You can verify the version, status, and configuration of the OSM mesh that's running on your cluster. Use `kubectl` to display the image version of the *osm-controller* deployment. For example:
```azurecli-interactive kubectl get deployment -n kube-system osm-controller -o=jsonpath='{$.spec.template.spec.containers[:1].image}'
$ kubectl get deployment -n kube-system osm-controller -o=jsonpath='{$.spec.temp
mcr.microsoft.com/oss/openservicemesh/osm-controller:v0.11.1 ```
-To verify the status of the OSM components running on your cluster, use `kubectl` to show the status of the *app.kubernetes.io/name=openservicemesh.io* deployments, pods, and services. For example:
+To verify the status of the OSM components running on your cluster, use `kubectl` to show the status of the `app.kubernetes.io/name=openservicemesh.io` deployments, pods, and services. For example:
```azurecli-interactive kubectl get deployments -n kube-system --selector app.kubernetes.io/name=openservicemesh.io
kubectl get services -n kube-system --selector app.kubernetes.io/name=openservic
``` > [!IMPORTANT]
-> If any pods have a status other than *Running*, such as *Pending*, your cluster may not have enough resources to run OSM. Review the sizing for your cluster, such as the number of nodes and the VM SKU, before continuing to use OSM on your cluster.
+> If any pods have a status other than `Running`, such as `Pending`, your cluster might not have enough resources to run OSM. Review the sizing for your cluster, such as the number of nodes and the virtual machine's SKU, before continuing to use OSM on your cluster.
To verify the configuration of your OSM mesh, use `kubectl get meshconfig`. For example:
To verify the configuration of your OSM mesh, use `kubectl get meshconfig`. For
kubectl get meshconfig osm-mesh-config -n kube-system -o yaml ```
-The following sample output shows the configuration of an OSM mesh:
+The following example output shows the configuration of an OSM mesh:
```yaml apiVersion: config.openservicemesh.io/v1alpha1
spec:
useHTTPSIngress: false ```
-The above example output shows `enablePermissiveTrafficPolicyMode: true`, which means OSM has a permissive traffic policy mode enabled. With permissive traffic mode enabled in your OSM mesh:
+The preceding example shows `enablePermissiveTrafficPolicyMode: true`, which means OSM has permissive traffic policy mode enabled. With this mode enabled in your OSM mesh:
* The [SMI][smi] traffic policy enforcement is bypassed. * OSM automatically discovers services that are a part of the service mesh. * OSM creates traffic policy rules on each Envoy proxy sidecar to be able to communicate with these services. -- ## Delete your cluster
-When the cluster is no longer needed, use the `az group delete` command to remove the resource group, cluster, and all related resources.
+When you no longer need the cluster, use the `az group delete` command to remove the resource group, the cluster, and all related resources:
```azurecli-interactive az group delete --name myResourceGroup --yes --no-wait ```
-Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh (OSM) add-on from your AKS cluster][osm-uninstall].
+Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh add-on from your AKS cluster][osm-uninstall].
## Next steps
-This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed and running. With the OSM add-on on your cluster you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
+This article showed you how to install the OSM add-on on an AKS cluster, and then verify that it's installed and running. With the OSM add-on installed on your cluster, you can [deploy a sample application][osm-deploy-sample-app] or [onboard an existing application][osm-onboard-app] to work with your OSM mesh.
[aks-ephemeral]: cluster-configuration.md#ephemeral-os [osm-sample]: open-service-mesh-deploy-new-application.md
aks Open Service Mesh Deploy Addon Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/open-service-mesh-deploy-addon-bicep.md
Title: Deploy Open Service Mesh AKS add-on using Bicep
-description: Deploy Open Service Mesh on Azure Kubernetes Service (AKS) using Bicep
+ Title: Deploy the Open Service Mesh add-on by using Bicep
+description: Use a Bicep template to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS).
Last updated 9/20/2021
-# Deploy Open Service Mesh (OSM) Azure Kubernetes Service (AKS) add-on using Bicep
+# Deploy the Open Service Mesh add-on by using Bicep
-This article will discuss how to deploy the OSM add-on to AKS using a [Bicep](../azure-resource-manager/bicep/index.yml) template.
+This article shows you how to deploy the Open Service Mesh (OSM) add-on to Azure Kubernetes Service (AKS) by using a [Bicep](../azure-resource-manager/bicep/index.yml) template.
> [!IMPORTANT] > The OSM add-on installs version *1.0.0* of OSM on your cluster.
-[Bicep](../azure-resource-manager/bicep/overview.md) is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources. Bicep can be used in place of creating Azure [ARM](../azure-resource-manager/templates/overview.md) templates for deploying your infrastructure-as-code Azure resources.
+[Bicep](../azure-resource-manager/bicep/overview.md) is a domain-specific language that uses declarative syntax to deploy Azure resources. You can use Bicep in place of creating [Azure Resource Manager templates](../azure-resource-manager/templates/overview.md) to deploy your infrastructure-as-code Azure resources.
## Prerequisites -- The Azure CLI, version 2.20.0 or later-- OSM version v0.11.1 or later-- An SSH Public Key used for deploying AKS-- [Visual Studio Code](https://code.visualstudio.com/) utilizing a Bash terminal-- Visual Studio Code [Bicep extension](../azure-resource-manager/bicep/install.md)
+- Azure CLI version 2.20.0 or later
+- OSM version 0.11.1 or later
+- An SSH public key used for deploying AKS
+- [Visual Studio Code](https://code.visualstudio.com/) with a Bash terminal
+- The Visual Studio Code [Bicep extension](../azure-resource-manager/bicep/install.md)
-## Install the OSM AKS add-on for a new AKS cluster using Bicep
+## Install the OSM add-on for a new AKS cluster by using Bicep
-For a new AKS cluster deployment scenario, start with a brand new deployment of an AKS cluster with the OSM add-on enabled at the cluster create operation. The following set of directions will use a generic Bicep template that deploys an AKS cluster using ephemeral disks, using the [`kubenet`](./configure-kubenet.md) CNI, and enabling the AKS OSM add-on. For more advanced deployment scenarios visit the [Bicep](../azure-resource-manager/bicep/overview.md) documentation.
+For deployment of a new AKS cluster, you enable the OSM add-on at cluster creation. The following instructions use a generic Bicep template that deploys an AKS cluster by using ephemeral disks and the [`kubenet`](./configure-kubenet.md) container network interface, and then enables the OSM add-on. For more advanced deployment scenarios, see [What is Bicep?](../azure-resource-manager/bicep/overview.md).
### Create a resource group
-In Azure, you can associate related resources using a resource group. Create a resource group by using [az group create](/cli/azure/group#az_group_create). The following example is used to create a resource group named in a specified Azure location (region):
+In Azure, you can associate related resources by using a resource group. Create a resource group by using [az group create](/cli/azure/group#az_group_create). The following example creates a resource group named *my-osm-bicep-aks-cluster-rg* in a specified Azure location (region):
```azurecli-interactive
az group create --name <my-osm-bicep-aks-cluster-rg> --location <azure-region>
```
### Create the main and parameters Bicep files
-Using Visual Studio Code with a bash terminal open, create a directory to store the necessary Bicep deployment files. The following example creates a directory named `bicep-osm-aks-addon` and changes to the directory
+By using Visual Studio Code with a Bash terminal open, create a directory to store the necessary Bicep deployment files. The following example creates a directory named *bicep-osm-aks-addon* and changes to the directory:
```azurecli-interactive
mkdir bicep-osm-aks-addon
cd bicep-osm-aks-addon
```
-Next create both the main and parameters files, as shown in the following example.
+Next, create both the main file and the parameters file, as shown in the following example:
```azurecli-interactive
touch osm.aks.bicep && touch osm.aks.parameters.json
```
-Open the `osm.aks.bicep` file and copy the following example content to it, then save the file.
+Open the *osm.aks.bicep* file and copy the following example content to it. Then save the file.
```azurecli-interactive
// https://docs.microsoft.com/azure/aks/troubleshooting#what-naming-restrictions-are-enforced-for-aks-resources-and-parameters
resource aksCluster 'Microsoft.ContainerService/managedClusters@2021-03-01' = {
}
```
-Open the `osm.aks.parameters.json` file and copy the following example content to it. Add the deployment-specific parameters, then save the file.
+Open the *osm.aks.parameters.json* file and copy the following example content to it. Add the deployment-specific parameters, and then save the file.
> [!NOTE]
-> The `osm.aks.parameters.json` is an example template parameters file needed for the Bicep deployment. You will have to update the specified parameters specifically for your deployment environment. The specific parameter values used by this example needs the following parameters to be updated. They are the _clusterName_, _clusterDNSPrefix_, _k8Version_, and _sshPubKey_. To find a list of supported Kubernetes version in your region, please use the `az aks get-versions --location <region>` command.
+> The *osm.aks.parameters.json* file is an example template parameters file needed for the Bicep deployment. Update the parameters specifically for your deployment environment. The specific parameter values in this example need the following parameters to be updated: `clusterName`, `clusterDNSPrefix`, `k8Version`, and `sshPubKey`. To find a list of supported Kubernetes versions in your region, use the `az aks get-versions --location <region>` command.
```azurecli-interactive
{
Open the `osm.aks.parameters.json` file and copy the following example content t
}
```
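The note above calls out four parameters that must be updated. As an illustration only, the following sketch writes a filled-in parameters file; every value is a placeholder (the cluster name, DNS prefix, Kubernetes version, and SSH key shown here are not real), and the `$schema` URL is the standard ARM deployment-parameters schema:

```shell
# Hypothetical, filled-in example of osm.aks.parameters.json.
# All values are placeholders -- substitute your own before deploying.
cat > osm.aks.parameters.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterName": { "value": "my-osm-aks-cluster" },
    "clusterDNSPrefix": { "value": "my-osm-aks" },
    "k8Version": { "value": "1.23.3" },
    "sshPubKey": { "value": "ssh-rsa AAAA...your-key... user@host" }
  }
}
EOF

# Quick sanity check: the file is valid JSON and names all four parameters.
python3 -c 'import json; p = json.load(open("osm.aks.parameters.json"))["parameters"]; print(sorted(p))'
```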
-### Deploy the Bicep file
+### Deploy the Bicep files
-To deploy the previously created Bicep files, open the terminal and authenticate to your Azure account for the Azure CLI using the `az login` command. Once authenticated to your Azure subscription, run the following commands for deployment.
+To deploy the previously created Bicep files, open the terminal and authenticate to your Azure account for the Azure CLI by using the `az login` command. After you're authenticated to your Azure subscription, run the following commands for deployment:
```azurecli-interactive
az group create --name osm-bicep-test --location eastus2
az deployment group create \
--parameters @osm.aks.parameters.json
```
-When the deployment finishes, you should see a message indicating the deployment succeeded.
+When the deployment finishes, you should see a message that says the deployment succeeded.
-## Validate the AKS OSM add-on installation
+## Validate installation of the OSM add-on
-There are several commands to run to check all of the components of the AKS OSM add-on are enabled and running:
+You use several commands to check that all of the components of the OSM add-on are enabled and running.
-First we can query the add-on profiles of the cluster to check the enabled state of the add-ons installed. The following command should return "true".
+First, query the add-on profiles of the cluster to check the enabled state of the installed add-ons. The following command should return `true`:
```azurecli-interactive
az aks list -g <my-osm-aks-cluster-rg> -o json | jq -r '.[].addonProfiles.openServiceMesh.enabled'
```
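If `jq` isn't installed, the same check can be done with `python3`. The sketch below runs against a saved copy of the `az aks list -o json` output; the sample document created here is hand-written purely to show the shape the filter expects, not real CLI output:

```shell
# With a cluster available you would save the real output first:
#   az aks list -g <my-osm-aks-cluster-rg> -o json > aks.json
# For illustration, a minimal stand-in document with the same shape:
cat > aks.json <<'EOF'
[ { "addonProfiles": { "openServiceMesh": { "enabled": true } } } ]
EOF

# Equivalent of the jq filter: prints "true" when the add-on is enabled.
python3 -c 'import json; print(str(json.load(open("aks.json"))[0]["addonProfiles"]["openServiceMesh"]["enabled"]).lower())'
```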
-The following `kubectl` commands will report the status of the osm-controller.
+The following `kubectl` commands will report the status of *osm-controller*:
```azurecli-interactive
kubectl get deployments -n kube-system --selector app=osm-controller
kubectl get pods -n kube-system --selector app=osm-controller
kubectl get services -n kube-system --selector app=osm-controller
```
-## Accessing the AKS OSM add-on configuration
+## Access the OSM add-on configuration
-Currently you can access and configure the OSM controller configuration via the OSM MeshConfig resource, and you can view the OSM controller configuration settings via the CLI use the **kubectl** get command as shown below.
+You can configure the OSM controller via the OSM MeshConfig resource, and you can view the OSM controller's configuration settings via the Azure CLI. Use the `kubectl get` command as shown in the following example:
```azurecli-interactive
kubectl get meshconfig osm-mesh-config -n kube-system -o yaml
```
-Output of the MeshConfig is shown in the following:
+Here's an example output of MeshConfig:
```
apiVersion: config.openservicemesh.io/v1alpha1
spec:
useHTTPSIngress: false
```
-Notice the **enablePermissiveTrafficPolicyMode** is configured to **true**. Permissive traffic policy mode in OSM is a mode where the [SMI](https://smi-spec.io/) traffic policy enforcement is bypassed. In this mode, OSM automatically discovers services that are a part of the service mesh. The discovered services will have traffic policy rules programed on each Envoy proxy sidecar to allow communications between these services.
+Notice that `enablePermissiveTrafficPolicyMode` is configured to `true`. In OSM, permissive traffic policy mode bypasses [SMI](https://smi-spec.io/) traffic policy enforcement. In this mode, OSM automatically discovers services that are a part of the service mesh. The discovered services will have traffic policy rules programmed on each Envoy proxy sidecar to allow communications between these services.
> [!WARNING]
-> Before proceeding please verify that your permissive traffic policy mode is set to true, if not please change it to **true** using the command below
-
-```OSM Permissive Mode to True
-kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
-```
+> Before you proceed, verify that your permissive traffic policy mode is set to `true`. If it isn't, change it to `true` by using the following command:
+>
+> ```azurecli-interactive
+> kubectl patch meshconfig osm-mesh-config -n kube-system -p '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' --type=merge
+> ```
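The `--type=merge` patch above sends a small JSON merge-patch body. A quick offline check that the body is well-formed before you run the patch (this validates the JSON only; it doesn't touch the cluster):

```shell
# Pretty-prints the patch body if it parses; fails loudly if the JSON is malformed.
printf '%s' '{"spec":{"traffic":{"enablePermissiveTrafficPolicyMode":true}}}' \
  | python3 -m json.tool
```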
## Clean up resources
-When the Azure resources are no longer needed, use the Azure CLI to delete the deployment test resource group.
+When you no longer need the Azure resources, use the Azure CLI to delete the deployment's test resource group:
```
az group delete --name osm-bicep-test
```
-Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh (OSM) add-on from your AKS cluster][osm-uninstall].
+Alternatively, you can uninstall the OSM add-on and the related resources from your cluster. For more information, see [Uninstall the Open Service Mesh add-on from your AKS cluster][osm-uninstall].
## Next steps
-This article showed you how to install the OSM add-on on an AKS cluster and verify it is installed and running. With the OSM add-on on your cluster you can [Deploy a sample application][osm-deploy-sample-app] or [Onboard an existing application][osm-onboard-app] to work with your OSM mesh.
+This article showed you how to install the OSM add-on on an AKS cluster and verify that it's installed and running. With the OSM add-on installed on your cluster, you can [deploy a sample application][osm-deploy-sample-app] or [onboard an existing application][osm-onboard-app] to work with your OSM mesh.
<!-- Links --> <!-- Internal -->
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
For new **minor** versions of Kubernetes:
* Users have **30 days** from version removal to upgrade to a supported minor version release to continue receiving support.

For new **patch** versions of Kubernetes:
- * Because of the urgent nature of patch versions, they can be introduced into the service as they become available.
+ * Because of the urgent nature of patch versions, they can be introduced into the service as they become available. Once available, patches will have a two-month minimum lifecycle.
* In general, AKS does not broadly communicate the release of new patch versions. However, AKS constantly monitors and validates available CVE patches to support them in AKS in a timely manner. If a critical patch is found or user action is required, AKS will notify users to upgrade to the newly available patch.
- * Users have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support.
+ * Users have **30 days** from a patch release's removal from AKS to upgrade into a supported patch and continue receiving support. However, you will **no longer be able to create clusters or node pools once the version is deprecated/removed.**
### Supported versions policy exceptions
No. Once a version is deprecated/removed, you cannot create a cluster with that
No. You will not be allowed to add node pools of the deprecated version to your cluster. You can add node pools of a new version. However, this may require you to update the control plane first.
+**How often do you update patches?**
+
+Patches have a two-month minimum lifecycle. To keep up to date when new patches are released, follow the [AKS Release Notes](https://github.com/Azure/AKS/releases).
+
## Next steps

For information on how to upgrade your cluster, see [Upgrade an Azure Kubernetes Service (AKS) cluster][aks-upgrade].
api-management Api Management Api Import Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-api-import-restrictions.md
description: Details of known issues and restrictions on Open API, WSDL, and WAD
documentationcenter: '' - -- Previously updated : 10/26/2021+ Last updated : 03/02/2022
When importing an API, you might encounter some restrictions or need to identify and rectify issues before you can successfully import. In this article, you'll learn:

* API Management's behavior during OpenAPI import.
-* Import limitations, organized by the import format of the API.
-* How OpenAPI export works.
+* OpenAPI import limitations and how OpenAPI export works.
+* Requirements and limitations for WSDL and WADL import.
## API Management during OpenAPI import
For each operation, its:
## <a name="wsdl"> </a>WSDL
-You can create SOAP pass-through and SOAP-to-REST APIs with WSDL files.
+You can create [SOAP pass-through](import-soap-api.md) and [SOAP-to-REST](restify-soap-api.md) APIs with WSDL files.
### SOAP bindings

- Only SOAP bindings of "document" and "literal" encoding style are supported.
- No support for "rpc" style or SOAP-Encoding.
-### WSDL:Import
-Not supported. Instead, merge the imports into one document.
+### Unsupported directives
+`wsdl:import`, `xsd:import`, and `xsd:include` aren't supported. Instead, merge the dependencies into one document.
+
+For an open-source tool to resolve and merge `wsdl:import`, `xsd:import`, and `xsd:include` dependencies in a WSDL file, see this [GitHub repo](https://github.com/Azure-Samples/api-management-schema-import).
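Before importing, a quick check for any of these directives can save a failed attempt. This is a grep sketch only; `sample.wsdl` is a placeholder file created here just for the demonstration:

```shell
# A tiny WSDL fragment that still contains an unsupported wsdl:import directive.
cat > sample.wsdl <<'EOF'
<wsdl:definitions xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/"
                  xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <wsdl:import namespace="urn:example" location="other.wsdl"/>
</wsdl:definitions>
EOF

# Lists any unsupported directives with line numbers; no output means the
# file is already self-contained and ready to import.
grep -nE 'wsdl:import|xsd:import|xsd:include' sample.wsdl
```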
### Messages with multiple parts

This message type is not supported.
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
The Azure portal provides commands to create self-hosted gateway resources in th
Consider [creating and deploying](https://www.kubernetesbyexample.com/) a self-hosted gateway into a separate namespace in production.

## Number of replicas
-The minimum number of replicas suitable for production is two.
+The minimum number of replicas suitable for production is three, preferably combined with [highly available scheduling of the instances](#high-availability).
By default, a self-hosted gateway is deployed with a **RollingUpdate** deployment [strategy](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy). Review the default values and consider explicitly setting the [maxUnavailable](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-unavailable) and [maxSurge](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#max-surge) fields, especially when you're using a high replica count.
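As a sketch only (the deployment name and the chosen bounds are illustrative placeholders, not the add-on's actual manifest or a recommendation beyond the three-replica minimum above), explicitly pinned rolling-update bounds in a gateway deployment might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: self-hosted-gateway   # placeholder name
spec:
  replicas: 3                 # production minimum discussed above
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1       # at most one replica down during an update
      maxSurge: 1             # at most one extra replica created during an update
```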
api-management Import Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-soap-api.md
Title: Import SOAP API using the Azure portal | Microsoft Docs
-description: Learn how to import a standard XML representation of a SOAP API, and then test the API in the Azure and Developer portals.
+ Title: Import SOAP API to Azure API Management using the portal | Microsoft Docs
+description: Learn how to import a SOAP API to Azure API Management as a WSDL specification. Then, test the API in the Azure portal.
Previously updated : 02/10/2022 Last updated : 03/01/2022
-# Import SOAP API
+# Import SOAP API to API Management
-This article shows how to import a standard XML representation of a SOAP API. The article also shows how to test the API Management API.
+This article shows how to import a WSDL specification, which is a standard XML representation of a SOAP API. The article also shows how to test the API in API Management.
In this article, you learn how to:

> [!div class="checklist"]
-> * Import SOAP API
+> * Import a SOAP API
> * Test the API in the Azure portal
+
## Prerequisites

Complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md)

[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-navigate-to-instance.md)]
-## <a name="create-api"> </a>Import and publish a back-end API
-
-1. Navigate to your API Management service in the Azure portal and select **APIs** from the menu.
-2. Select **WSDL** from the **Add a new API** list.
-
- ![Soap api](./media/import-soap-api/wsdl-api.png)
-3. In the **WSDL specification**, enter the URL to where your SOAP API resides.
-4. The **SOAP pass-through** radio button is selected by default. With this selection, the API is going to be exposed as SOAP. Consumer has to use SOAP rules. If you want to "restify" the API, follow the steps in [Import a SOAP API and convert it to REST](restify-soap-api.md).
-
- ![Screenshot shows the Create from W S D L dialog box where you can enter a W S D L specification.](./media/import-soap-api/pass-through.png)
-5. Press tab.
+## <a name="create-api"> </a>Import and publish a backend API
- The following fields get filled up with the info from the SOAP API: Display name, Name, Description.
-6. Add an API URL suffix. The suffix is a name that identifies this specific API in this API Management instance. It has to be unique in this API Management instance.
-7. Publish the API by associating the API with a product. In this case, the "*Unlimited*" product is used. If you want for the API to be published and be available to developers, add it to a product. You can do it during API creation or set it later.
+1. From the left menu, under the **APIs** section, select **APIs** > **+ Add API**.
+1. Under **Create from definition**, select **WSDL**.
- Products are associations of one or more APIs. You can include a number of APIs and offer them to developers through the developer portal. Developers must first subscribe to a product to get access to the API. When they subscribe, they get a subscription key that is good for any API in that product. If you created the API Management instance, you are an administrator already, so you are subscribed to every product by default.
+ ![SOAP API](./media/import-soap-api/wsdl-api.png)
+1. In **WSDL specification**, enter the URL to your SOAP API, or click **Select a file** to select a local WSDL file.
+1. In **Import method**, **SOAP pass-through** is selected by default.
+ With this selection, the API is exposed as SOAP, and API consumers have to use SOAP rules. If you want to "restify" the API, follow the steps in [Import a SOAP API and convert it to REST](restify-soap-api.md).
- By default, each API Management instance comes with two sample products:
+ ![Create SOAP API from WDL specification](./media/import-soap-api/pass-through.png)
+1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**.
+1. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab.
- * **Starter**
- * **Unlimited**
-8. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab. The settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
-9. Select **Create**.
+ For more information about API settings, see the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+1. Select **Create**.
### Test the new API in the portal
-Operations can be called directly from the administrative portal, which provides a convenient way to view and test the operations of an API.
+Operations can be called directly from the portal, which provides a convenient way to view and test the operations of an API.
1. Select the API you created in the previous step.
2. Press the **Test** tab.
3. Select some operation.
- The page displays fields for query parameters and fields for the headers. One of the headers is "Ocp-Apim-Subscription-Key", for the subscription key of the product that is associated with this API. If you created the API Management instance, you are an administrator already, so the key is filled in automatically.
+ The page displays fields for query parameters and fields for the headers. One of the headers is **Ocp-Apim-Subscription-Key**, for the subscription key of the product that is associated with this API. If you created the API Management instance, you're an administrator already, so the key is filled in automatically.
1. Press **Send**.
- Backend responds with **200 OK** and some data.
+ When the test is successful, the backend responds with **200 OK** and some data.
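The same test can also be made outside the portal with curl. The envelope below is a generic SOAP 1.1 skeleton; the gateway host, API URL suffix, and subscription key are placeholders you must replace with your own:

```shell
# Build a minimal SOAP 1.1 request body (operation-specific payload omitted).
cat > request.xml <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <!-- operation-specific payload goes here -->
  </soap:Body>
</soap:Envelope>
EOF

# Hypothetical call -- replace the host, suffix, and key with your own values:
# curl -s -H 'Ocp-Apim-Subscription-Key: <subscription-key>' \
#      -H 'Content-Type: text/xml' \
#      --data @request.xml \
#      https://<apim-name>.azure-api.net/<api-url-suffix>

# Sanity check on the envelope we just wrote: four lines use the soap: prefix.
grep -c 'soap:' request.xml
```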
## Wildcard SOAP action
api-management Restify Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/restify-soap-api.md
Title: Import a SOAP API and convert to REST using the Azure portal | Microsoft Docs
-description: Learn how to import a SOAP API, convert it to REST with API Management, and then test the API in the Azure and Developer portals.
+ Title: Import SOAP API to Azure API Management and convert to REST using the portal | Microsoft Docs
+description: Learn how to import a SOAP API to Azure API Management as a WSDL specification and convert it to a REST API. Then, test the API in the Azure portal.
- -- Previously updated : 11/22/2017+ Last updated : 03/01/2022
-# Import a SOAP API and convert to REST
+# Import SOAP API to API Management and convert to REST
-This article shows how to import a SOAP API and convert it to REST. The article also shows how to test the APIM API.
+This article shows how to import a SOAP API as a WSDL specification and then convert it to a REST API. The article also shows how to test the API in API Management.
In this article, you learn how to:

> [!div class="checklist"]
> * Import a SOAP API and convert to REST
> * Test the API in the Azure portal
-> * Test the API in the Developer portal
+ ## Prerequisites
Complete the following quickstart: [Create an Azure API Management instance](get
## <a name="create-api"> </a>Import and publish a back-end API
-1. Select **APIs** from under **API MANAGEMENT**.
-2. Select **WSDL** from the **Add a new API** list.
+1. From the left menu, under the **APIs** section, select **APIs** > **+ Add API**.
+1. Under **Create from definition**, select **WSDL**.
![SOAP API](./media/restify-soap-api/wsdl-api.png)
-3. In the **WSDL specification**, enter the URL to where your SOAP API resides.
-4. Click **SOAP to REST** radio button. When this option is clicked, APIM attempts to make an automatic transformation between XML and JSON. In this case consumers should be calling the API as a RESTful API, which returns JSON. APIM is converting each request into a SOAP call.
+1. In **WSDL specification**, enter the URL to your SOAP API, or click **Select a file** to select a local WSDL file.
+1. In **Import method**, select **SOAP to REST**.
+ When this option is selected, API Management attempts to make an automatic transformation between XML and JSON. In this case, consumers should call the API as a RESTful API, which returns JSON. API Management converts each request to a SOAP call.
![SOAP to REST](./media/restify-soap-api/soap-to-rest.png)
-5. Press tab.
-
- The following fields get filled up with the info from the SOAP API: Display name, Name, Description.
-6. Add an API URL suffix. The suffix is a name that identifies this specific API in this APIM instance. It has to be unique in this APIM instance.
-9. Publish the API by associating the API with a product. In this case, the "*Unlimited*" product is used. If you want for the API to be published and be available to developers, add it to a product. You can do it during API creation or set it later.
-
- Products are associations of one or more APIs. You can include a number of APIs and offer them to developers through the developer portal. Developers must first subscribe to a product to get access to the API. When they subscribe, they get a subscription key that is good for any API in that product. If you created the APIM instance, you are an administrator already, so you are subscribed to every product by default.
-
- By default, each API Management instance comes with two sample products:
+1. The following fields are filled automatically with information from the SOAP API: **Display name**, **Name**, **Description**.
+1. Enter other API settings. You can set the values during creation or configure them later by going to the **Settings** tab.
- * **Starter**
- * **Unlimited**
-10. Select **Create**.
+ For more information about API settings, see the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+1. Select **Create**.
## Test the new API in the Azure portal

Operations can be called directly from the Azure portal, which provides a convenient way to view and test the operations of an API.

1. Select the API you created in the previous step.
-2. Press the **Test** tab.
-3. Select some operation.
+2. Select the **Test** tab.
+3. Select an operation.
- The page displays fields for query parameters and fields for the headers. One of the headers is "Ocp-Apim-Subscription-Key", for the subscription key of the product that is associated with this API. If you created the APIM instance, you are an administrator already, so the key is filled in automatically.
+ The page displays fields for query parameters and fields for the headers. One of the headers is **Ocp-Apim-Subscription-Key**, for the subscription key of the product that is associated with this API. If you created the API Management instance, you're an administrator already, so the key is filled in automatically.
1. Press **Send**.
- Backend responds with **200 OK** and some data.
+ When the test is successful, the backend responds with **200 OK** and some data.
[!INCLUDE [api-management-navigate-to-instance.md](../../includes/api-management-append-apis.md)]
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported .NET Core versions, run the following command in the [Cloud Shell](https://shell.azure.com):

```azurecli-interactive
-az webapp list-runtimes --linux | grep DOTNET
+az webapp list-runtimes --os linux | grep DOTNET
``` ::: zone-end
app-service Configure Language Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md
az webapp config show --name <app-name> --resource-group <resource-group-name> -
To show all supported Java versions, run the following command in the [Cloud Shell](https://shell.azure.com):

```azurecli-interactive
-az webapp list-runtimes | grep java
+az webapp list-runtimes --os windows | grep java
``` ::: zone-end
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported Java versions, run the following command in the [Cloud Shell](https://shell.azure.com):

```azurecli-interactive
-az webapp list-runtimes --linux | grep "JAVA\|TOMCAT\|JBOSSEAP"
+az webapp list-runtimes --os linux | grep "JAVA\|TOMCAT\|JBOSSEAP"
``` ::: zone-end
az webapp list-runtimes --linux | grep "JAVA\|TOMCAT\|JBOSSEAP"
### Build Tools

#### Maven
+
With the [Maven Plugin for Azure Web Apps](https://github.com/microsoft/azure-maven-plugins/tree/develop/azure-webapp-maven-plugin), you can prepare your Maven Java project for Azure Web App easily with one command in your project root:

```shell
mvn com.microsoft.azure:azure-webapp-maven-plugin:2.2.0:config
```

This command adds an `azure-webapp-maven-plugin` plugin and related configuration by prompting you to select an existing Azure Web App or create a new one. Then you can deploy your Java app to Azure using the following command:
+
```shell
mvn package azure-webapp:deploy
```

Here is a sample configuration in `pom.xml`:
+
```xml
<plugin>
  <groupId>com.microsoft.azure</groupId>
Here is a sample configuration in `pom.xml`:
#### Gradle
+
1. Setup the [Gradle Plugin for Azure Web Apps](https://github.com/microsoft/azure-gradle-plugins/tree/master/azure-webapp-gradle-plugin) by adding the plugin to your `build.gradle`:
+
```groovy
plugins {
  id "com.microsoft.azure.azurewebapp" version "1.2.0"
Here is a sample configuration in `pom.xml`:
1. Configure your Web App details; corresponding Azure resources will be created if they don't exist. Here is a sample configuration. For details, refer to this [document](https://github.com/microsoft/azure-gradle-plugins/wiki/Webapp-Configuration).
+
```groovy
azurewebapp {
  subscription = '<your subscription id>'
Here is a sample configuration, for details, refer to this [document](https://gi
```

1. Deploy with one command.
+
```shell
gradle azureWebAppDeploy
```
-
+
### IDEs
+
Azure provides seamless Java App Service development experience in popular Java IDEs, including:
+
- *VS Code*: [Java Web Apps with Visual Studio Code](https://code.visualstudio.com/docs/java/java-webapp#_deploy-web-apps-to-the-cloud)
- *IntelliJ IDEA*: [Create a Hello World web app for Azure App Service using IntelliJ](/azure/developer/java/toolkit-for-intellij/create-hello-world-web-app)
- *Eclipse*: [Create a Hello World web app for Azure App Service using Eclipse](/azure/developer/java/toolkit-for-eclipse/create-hello-world-web-app)

### Kudu API
+
#### Java SE
-To deploy .jar files to Java SE, use the `/api/publish/` endpoint of the Kudu site. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
+To deploy .jar files to Java SE, use the `/api/publish/` endpoint of the Kudu site. For more information on this API, see [this documentation](./deploy-zip.md#deploy-warjarear-packages).
> [!NOTE]
-> Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you do not wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](./faq-app-service-linux.yml) textbox in the Configuration section of the portal. The startup script does not run from the directory into which it is placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
+> Your .jar application must be named `app.jar` for App Service to identify and run your application. The Maven Plugin (mentioned above) will automatically rename your application for you during deployment. If you do not wish to rename your JAR to *app.jar*, you can upload a shell script with the command to run your .jar app. Paste the absolute path to this script in the [Startup File](./faq-app-service-linux.yml) textbox in the Configuration section of the portal. The startup script does not run from the directory into which it is placed. Therefore, always use absolute paths to reference files in your startup script (for example: `java -jar /home/myapp/myapp.jar`).
#### Tomcat
Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-
Enable [application logging](troubleshoot-diagnostic-logs.md#enable-application-logging-linuxcontainer) through the Azure portal or [Azure CLI](/cli/azure/webapp/log#az_webapp_log_config) to configure App Service to write your application's standard console output and standard console error streams to the local filesystem or Azure Blob Storage. If you need longer retention, configure the application to write output to a Blob storage container. Your Java and Tomcat app logs can be found in the */home/LogFiles/Application/* directory.
-Azure Blob Storage logging for Linux based App Services can only be configured using [Azure Monitor](./troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor)
+Azure Blob Storage logging for Linux-based App Services can only be configured using [Azure Monitor](./troubleshoot-diagnostic-logs.md#send-logs-to-azure-monitor).
::: zone-end
To configure the app setting from the Maven plugin, add setting/value tags in th
::: zone pivot="platform-windows"

> [!NOTE]
-> You do not need to create a web.config file when using Tomcat on Windows App Service.
+> You do not need to create a web.config file when using Tomcat on Windows App Service.
::: zone-end
To enable via the Azure CLI, you will need to create an Application Insights res
> To retrieve a list of other locations, run `az account list-locations`. ::: zone pivot="platform-windows"
-
+ 3. Set the instrumentation key, connection string, and monitoring agent version as app settings on the web app. Replace `<instrumentationKey>` and `<connectionString>` with the values from the previous step. ```azurecli
To enable via the Azure CLI, you will need to create an Application Insights res
::: zone-end ::: zone pivot="platform-linux"
-
+ 3. Set the instrumentation key, connection string, and monitoring agent version as app settings on the web app. Replace `<instrumentationKey>` and `<connectionString>` with the values from the previous step. ```azurecli
To enable via the Azure CLI, you will need to create an Application Insights res
5. Upload the unpacked NewRelic Java agent files into a directory under */home/site/wwwroot/apm*. The files for your agent should be in */home/site/wwwroot/apm/newrelic*. 6. Modify the YAML file at */home/site/wwwroot/apm/newrelic/newrelic.yml* and replace the placeholder license value with your own license key. 7. In the Azure portal, browse to your application in App Service and create a new Application Setting.
-
+ - For **Java SE** apps, create an environment variable named `JAVA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`. - For **Tomcat**, create an environment variable named `CATALINA_OPTS` with the value `-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar`. ::: zone-end
-> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
+> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
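One way to set this value from the CLI rather than the portal (a sketch; the agent path matches the NewRelic steps above):

```azurecli
az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings JAVA_OPTS="-javaagent:/home/site/wwwroot/apm/newrelic/newrelic.jar"
```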
### Configure AppDynamics
To enable via the Azure CLI, you will need to create an Application Insights res
::: zone-end > [!NOTE]
-> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
+> If you already have an environment variable for `JAVA_OPTS` or `CATALINA_OPTS`, append the `-javaagent:/...` option to the end of the current value.
## Configure data sources
Next, determine if the data source should be available to one application or to
#### Shared server-level resources
-Tomcat installations on App Service on Windows exist in shared space on the App Service Plan. You can't directly modify a Tomcat installation for server-wide configuration. To make server-level configuration changes to your Tomcat installation, you must copy Tomcat to a local folder, in which you can modify Tomcat's configuration.
+Tomcat installations on App Service on Windows exist in shared space on the App Service Plan. You can't directly modify a Tomcat installation for server-wide configuration. To make server-level configuration changes to your Tomcat installation, you must copy Tomcat to a local folder, in which you can modify Tomcat's configuration.
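As a rough sketch, the copy step might look like the following Windows startup fragment. The `%AZURE_TOMCAT90_HOME%` source variable and `%HOME%\tomcat` target are assumptions for illustration; check your instance for the actual Tomcat location before using them:

```cmd
REM Illustrative only: copy the shared Tomcat into writable local storage on first start.
if not exist "%HOME%\tomcat" (
  mkdir "%HOME%\tomcat"
  xcopy "%AZURE_TOMCAT90_HOME%" "%HOME%\tomcat" /E /I /Y
)
```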
##### Automate creating custom Tomcat on app start
Finally, place the driver JARs in the Tomcat classpath and restart your App Serv
There are three core steps when [registering a data source with JBoss EAP](https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.0/html/configuration_guide/datasource_management): uploading the JDBC driver, adding the JDBC driver as a module, and registering the module. App Service is a stateless hosting service, so the configuration commands for adding and registering the data source module must be scripted and applied as the container starts.
-1. Obtain your database's JDBC driver.
+1. Obtain your database's JDBC driver.
2. Create an XML module definition file for the JDBC driver. The example shown below is a module definition for PostgreSQL. ```xml
There are three core steps when [registering a data source with JBoss EAP](https
data-source add --name=postgresDS --driver-name=postgres --jndi-name=java:jboss/datasources/postgresDS --connection-url=${POSTGRES_CONNECTION_URL,env.POSTGRES_CONNECTION_URL:jdbc:postgresql://db:5432/postgres} --user-name=${POSTGRES_SERVER_ADMIN_FULL_NAME,env.POSTGRES_SERVER_ADMIN_FULL_NAME:postgres} --password=${POSTGRES_SERVER_ADMIN_PASSWORD,env.POSTGRES_SERVER_ADMIN_PASSWORD:example} --use-ccm=true --max-pool-size=5 --blocking-timeout-wait-millis=5000 --enabled=true --driver-class=org.postgresql.Driver --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --jta=true --use-java-context=true --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker ```
-1. Create a startup script, `startup_script.sh` that calls the JBoss CLI commands. The example below shows how to call your `jboss-cli-commands.cli`. Later you will configure App Service to run this script when the container starts.
+1. Create a startup script, `startup_script.sh` that calls the JBoss CLI commands. The example below shows how to call your `jboss-cli-commands.cli`. Later you will configure App Service to run this script when the container starts.
```bash $JBOSS_HOME/bin/jboss-cli.sh --connect --file=/home/site/deployments/tools/jboss-cli-commands.cli
If you choose to pin the minor version, you will need to periodically update the
::: zone pivot="platform-linux" ## JBoss EAP App Service Plans <a id="jboss-eap-hardware-options"></a> JBoss EAP is only available on the Premium v3 and Isolated v2 App Service Plan types. Customers that created a JBoss EAP site on a different tier during the public preview should scale up to the Premium v3 or Isolated v2 tier to avoid unexpected behavior.
Microsoft and Adoptium builds of OpenJDK are provided and supported on App Servi
| Java 11 | 11.0.13 (MSFT) | 11.0.13 (MSFT) | | Java 17 | 17.0.1 (MSFT) | 17.0.1 (MSFT) |
-\* In following releases, Java 8 on Linux will be distributed from Adoptium builds of the OpenJDK.
+\* In future releases, Java 8 on Linux will be distributed from Adoptium builds of the OpenJDK.
If you are [pinned](#choosing-a-java-runtime-version) to an older minor version of Java your site may be using the [Zulu for Azure](https://www.azul.com/downloads/azure-only/zulu/) binaries provided through [Azul Systems](https://www.azul.com/). You can continue to use these binaries for your site, but any security patches or improvements will only be available in new versions of the OpenJDK, so we recommend that you periodically update your Web Apps to a later version of Java.
Supported JDKs are automatically patched on a quarterly basis in January, April,
Patches and fixes for major security vulnerabilities will be released as soon as they become available in Microsoft builds of the OpenJDK. A "major" vulnerability is defined by a base score of 9.0 or higher on the [NIST Common Vulnerability Scoring System, version 2](https://nvd.nist.gov/vuln-metrics/cvss).
-Tomcat 8.0 has reached [End of Life (EOL) as of September 30, 2018](https://tomcat.apache.org/tomcat-80-eol.html). While the runtime is still available on Azure App Service, Azure will not apply security updates to Tomcat 8.0. If possible, migrate your applications to Tomcat 8.5 or 9.0. Both Tomcat 8.5 and 9.0 are available on Azure App Service. See the [official Tomcat site](https://tomcat.apache.org/whichversion.html) for more information.
+Tomcat 8.0 has reached [End of Life (EOL) as of September 30, 2018](https://tomcat.apache.org/tomcat-80-eol.html). While the runtime is still available on Azure App Service, Azure will not apply security updates to Tomcat 8.0. If possible, migrate your applications to Tomcat 8.5 or 9.0. Both Tomcat 8.5 and 9.0 are available on Azure App Service. See the [official Tomcat site](https://tomcat.apache.org/whichversion.html) for more information.
-Community support for Java 7 will terminate on July 29th, 2022 and [Java 7 will be retired from App Service](https://azure.microsoft.com/updates/transition-to-java-11-or-8-by-29-july-2022/) at that time. If you have a web app runnning on Java 7, please upgrade to Java 8 or 11 before July 29th.
+Community support for Java 7 will terminate on July 29th, 2022 and [Java 7 will be retired from App Service](https://azure.microsoft.com/updates/transition-to-java-11-or-8-by-29-july-2022/) at that time. If you have a web app running on Java 7, upgrade to Java 8 or 11 before July 29th.
### Deprecation and retirement
app-service Configure Language Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-nodejs.md
az webapp config appsettings list --name <app-name> --resource-group <resource-g
To show all supported Node.js versions, navigate to `https://<sitename>.scm.azurewebsites.net/api/diagnostics/runtime` or run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes | grep node
+az webapp list-runtimes --os windows | grep node
``` ::: zone-end
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported Node.js versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes --linux | grep NODE
+az webapp list-runtimes --os linux | grep NODE
``` ::: zone-end
To set your app to a [supported Node.js version](#show-nodejs-version), run the
az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings WEBSITE_NODE_DEFAULT_VERSION="~16" ```
-> [!NOTE]
+> [!NOTE]
> This example uses the recommended "tilde syntax" to target the latest available version of Node.js 16 runtime on App Service.
->
+>
> Since the runtime is regularly patched and updated by the platform, it's not recommended to target a specific minor version/patch, as these are not guaranteed to be available due to potential security risks. > [!NOTE]
The Node.js containers come with [PM2](https://pm2.keymetrics.io/), a production
|[Run npm start](#run-npm-start)|Development use only.| |[Run custom command](#run-custom-command)|Either development or staging.| - ### Run with PM2 The container automatically starts your app with PM2 when one of the common Node.js files is found in your project:
To use a custom *package.json* in your project, run the following command in the
az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<filename>.json" ``` - ## Debug remotely > [!NOTE] > Remote debugging is currently in Preview.
-You can debug your Node.js app remotely in [Visual Studio Code](https://code.visualstudio.com/) if you configure it to [run with PM2](#run-with-pm2), except when you run it using a *.config.js, *.yml, or *.yaml*.
+You can debug your Node.js app remotely in [Visual Studio Code](https://code.visualstudio.com/) if you configure it to [run with PM2](#run-with-pm2), except when you run it using a *.config.js*, *.yml*, or *.yaml* file.
In most cases, no extra configuration is required for your app. If your app is run with a *process.json* file (default or custom), it must have a `script` property in the JSON root. For example:
if (req.secure) {
::: zone-end - ::: zone pivot="platform-linux" ## Monitor with Application Insights Application Insights allows you to monitor your application's performance, exceptions, and usage without making any code changes. To attach the App Insights agent, go to your web app in the Portal and select **Application Insights** under **Settings**, then select **Turn on Application Insights**. Next, select an existing App Insights resource or create a new one. Finally, select **Apply** at the bottom. To instrument your web app using PowerShell, please see [these instructions](../azure-monitor/app/azure-web-apps-nodejs.md#enable-through-powershell)
-This agent will monitor your server-side Node.js application. To monitor your client-side JavaScript, [add the JavaScript SDK to your project](../azure-monitor/app/javascript.md).
+This agent will monitor your server-side Node.js application. To monitor your client-side JavaScript, [add the JavaScript SDK to your project](../azure-monitor/app/javascript.md).
For more information, see the [Application Insights extension release notes](../azure-monitor/app/web-app-extension-release-notes.md).
When a working Node.js app behaves differently in App Service or has errors, try
- [Access the log stream](#access-diagnostic-logs). - Test the app locally in production mode. App Service runs your Node.js apps in production mode, so you need to make sure that your project works as expected in production mode locally. For example:
- - Depending on your *package.json*, different packages may be installed for production mode (`dependencies` vs. `devDependencies`).
- - Certain web frameworks may deploy static files differently in production mode.
- - Certain web frameworks may use custom startup scripts when running in production mode.
+ - Depending on your *package.json*, different packages may be installed for production mode (`dependencies` vs. `devDependencies`).
+ - Certain web frameworks may deploy static files differently in production mode.
+ - Certain web frameworks may use custom startup scripts when running in production mode.
- Run your app in App Service in development mode. For example, in [MEAN.js](https://meanjs.org/), you can set your app to development mode in runtime by [setting the `NODE_ENV` app setting](configure-common.md). ::: zone pivot="platform-windows"
If you deploy your files by using Git, or by using ZIP deployment [with build au
- Your project root has a *package.json* that defines a `start` script that contains the path of a JavaScript file. - Your project root has either a *server.js* or an *app.js*.
-The generated *web.config* is tailored to the detected start script. For other deployment methods, add this *web.config* manually. Make sure the file is formatted properly.
+The generated *web.config* is tailored to the detected start script. For other deployment methods, add this *web.config* manually. Make sure the file is formatted properly.
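For reference, a minimal hand-written *web.config* for an iisnode-hosted app might look like the following sketch. The *server.js* entry point is an assumption; the file App Service generates contains additional rewrite rules tailored to your start script:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <!-- Route requests for the entry point through the iisnode module -->
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
    <rewrite>
      <rules>
        <!-- Send all other requests to the Node.js entry point -->
        <rule name="NodeApp">
          <match url=".*" />
          <action type="Rewrite" url="server.js" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```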
If you use [ZIP deployment](deploy-zip.md) (through Visual Studio Code, for example), be sure to [enable build automation](deploy-zip.md#enable-build-automation-for-zip-deploy) because it's not enabled by default. [`az webapp up`](/cli/azure/webapp#az_webapp_up) uses ZIP deployment with build automation enabled.
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported PHP versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes | grep php
+az webapp list-runtimes --os windows | grep php
``` ::: zone-end
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported PHP versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes --linux | grep PHP
+az webapp list-runtimes --os linux | grep PHP
``` ::: zone-end
Commit all your changes and deploy your code using Git, or Zip deploy [with buil
## Run Grunt/Bower/Gulp
-If you want App Service to run popular automation tools at deployment time, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script). App Service runs this script when you deploy with Git, or with [Zip deployment](deploy-zip.md) with [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy).
+If you want App Service to run popular automation tools at deployment time, such as Grunt, Bower, or Gulp, you need to supply a [custom deployment script](https://github.com/projectkudu/kudu/wiki/Custom-Deployment-Script). App Service runs this script when you deploy with Git, or with [Zip deployment](deploy-zip.md) with [build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy).
To enable your repository to run these tools, you need to add them to the dependencies in *package.json.* For example:
getenv("DB_HOST")
The web framework of your choice may use a subdirectory as the site root. For example, [Laravel](https://laravel.com/), uses the *public/* subdirectory as the site root.
-To customize the site root, set the virtual application path for the app by using the [`az resource update`](/cli/azure/resource#az_resource_update) command. The following example sets the site root to the *public/* subdirectory in your repository.
+To customize the site root, set the virtual application path for the app by using the [`az resource update`](/cli/azure/resource#az_resource_update) command. The following example sets the site root to the *public/* subdirectory in your repository.
```azurecli-interactive az resource update --name web --resource-group <group-name> --namespace Microsoft.Web --resource-type config --parent sites/<app-name> --set properties.virtualApplications[0].physicalPath="site\wwwroot\public" --api-version 2015-06-01 ```
-By default, Azure App Service points the root virtual application path (_/_) to the root directory of the deployed application files (_sites\wwwroot_).
+By default, Azure App Service points the root virtual application path (*/*) to the root directory of the deployed application files (*sites\wwwroot*).
::: zone-end
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
Navigate to the Kudu console (`https://<app-name>.scm.azurewebsites.net/DebugConsole`) and navigate to `d:\home\site`.
-Create a directory in `d:\home\site` called `ini`, then create an *.ini* file in the `d:\home\site\ini` directory (for example, *settings.ini)* with the directives you want to customize. Use the same syntax you would use in a *php.ini* file.
+Create a directory in `d:\home\site` called `ini`, then create an *.ini* file in the `d:\home\site\ini` directory (for example, *settings.ini*) with the directives you want to customize. Use the same syntax you would use in a *php.ini* file.
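A directives file might look like the following sketch (the directive values shown are illustrative, not recommendations):

```ini
; settings.ini -- same syntax as php.ini; values are examples only
expose_php = Off
upload_max_filesize = 10M
memory_limit = 256M
```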
For example, to change the value of [expose_php](https://php.net/manual/ini.core.php#ini.expose-php) run the following commands:
az webapp config appsettings set --name <app-name> --resource-group <resource-gr
Navigate to the web SSH session with your Linux container (`https://<app-name>.scm.azurewebsites.net/webssh/host`).
+Create a directory in `/home/site` called `ini`, then create an *.ini* file in the `/home/site/ini` directory (for example, *settings.ini*) with the directives you want to customize. Use the same syntax you would use in a *php.ini* file.
+Create a directory in `/home/site` called `ini`, then create an *.ini* file in the `/home/site/ini` directory (for example, *settings.ini)* with the directives you want to customize. Use the same syntax you would use in a *php.ini* file.
> [!TIP]
-> In the built-in Linux containers in App Service, */home* is used as persisted shared storage.
+> In the built-in Linux containers in App Service, */home* is used as persisted shared storage.
> For example, to change the value of [expose_php](https://php.net/manual/ini.core.php#ini.expose-php) run the following commands:
When a working PHP app behaves differently in App Service or has errors, try the
- [Access the log stream](#access-diagnostic-logs). - Test the app locally in production mode. App Service runs your app in production mode, so you need to make sure that your project works as expected in production mode locally. For example:
- - Depending on your *composer.json*, different packages may be installed for production mode (`require` vs. `require-dev`).
- - Certain web frameworks may deploy static files differently in production mode.
- - Certain web frameworks may use custom startup scripts when running in production mode.
+ - Depending on your *composer.json*, different packages may be installed for production mode (`require` vs. `require-dev`).
+ - Certain web frameworks may deploy static files differently in production mode.
+ - Certain web frameworks may use custom startup scripts when running in production mode.
- Run your app in App Service in debug mode. For example, in [Laravel](https://laravel.com/), you can configure your app to output debug messages in production by [setting the `APP_DEBUG` app setting to `true`](configure-common.md#configure-app-settings). ::: zone pivot="platform-linux"
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
You can use either the [Azure portal](https://portal.azure.com) or the Azure CLI
- **Azure CLI**: you have two options.
- - Run commands in the [Azure Cloud Shell](../cloud-shell/overview.md).
- - Run commands locally by installing the latest version of the [Azure CLI](/cli/azure/install-azure-cli), then sign in to Azure using [az login](/cli/azure/reference-index#az_login).
-
+ - Run commands in the [Azure Cloud Shell](../cloud-shell/overview.md).
+ - Run commands locally by installing the latest version of the [Azure CLI](/cli/azure/install-azure-cli), then sign in to Azure using [az login](/cli/azure/reference-index#az_login).
+ > [!NOTE] > Linux is currently the recommended option for running Python apps in App Service. For information on the Windows option, see [Python on the Windows flavor of App Service](/visualstudio/python/managing-python-on-azure-app-service).
You can use either the [Azure portal](https://portal.azure.com) or the Azure CLI
- **Azure CLI**:
- - Show the current Python version with [az webapp config show](/cli/azure/webapp/config#az_webapp_config_show):
-
- ```azurecli
- az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion
- ```
-
- Replace `<resource-group-name>` and `<app-name>` with the names appropriate for your web app.
-
- - Set the Python version with [az webapp config set](/cli/azure/webapp/config#az_webapp_config_set)
-
- ```azurecli
- az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PYTHON|3.7"
- ```
-
- - Show all Python versions that are supported in Azure App Service with [az webapp list-runtimes](/cli/azure/webapp#az_webapp_list_runtimes):
-
- ```azurecli
- az webapp list-runtimes --linux | grep PYTHON
- ```
-
+ - Show the current Python version with [az webapp config show](/cli/azure/webapp/config#az_webapp_config_show):
+
+ ```azurecli
+ az webapp config show --resource-group <resource-group-name> --name <app-name> --query linuxFxVersion
+ ```
+
+ Replace `<resource-group-name>` and `<app-name>` with the names appropriate for your web app.
+
+ - Set the Python version with [az webapp config set](/cli/azure/webapp/config#az_webapp_config_set)
+
+ ```azurecli
+ az webapp config set --resource-group <resource-group-name> --name <app-name> --linux-fx-version "PYTHON|3.7"
+ ```
+
+ - Show all Python versions that are supported in Azure App Service with [az webapp list-runtimes](/cli/azure/webapp#az_webapp_list_runtimes):
+
+ ```azurecli
+ az webapp list-runtimes --os linux | grep PYTHON
+ ```
+ You can run an unsupported version of Python by building your own container image instead. For more information, see [use a custom Docker image](tutorial-custom-container.md?pivots=container-linux). <!-- <a> element here to preserve external links-->
App Service's build system, called Oryx, performs the following steps when you d
1. Run custom post-build script if specified by the `POST_BUILD_COMMAND` setting. (Again, the script can run other Python and Node.js scripts, pip and npm commands, and Node-based tools.)
-By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTSTATIC` settings are empty.
+By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTSTATIC` settings are empty.
- To disable running collectstatic when building Django apps, set the `DISABLE_COLLECTSTATIC` setting to true.
By default, the `PRE_BUILD_COMMAND`, `POST_BUILD_COMMAND`, and `DISABLE_COLLECTS
- To run post-build commands, set the `POST_BUILD_COMMAND` setting to contain either a command, such as `echo Post-build command`, or a path to a script file relative to your project root, such as `scripts/postbuild.sh`. All commands must use relative paths to the project root folder.
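These are ordinary app settings, so they can also be set from the CLI; for example, a sketch of configuring a post-build script:

```azurecli
az webapp config appsettings set --name <app-name> --resource-group <resource-group-name> --settings POST_BUILD_COMMAND="scripts/postbuild.sh"
```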
-For additional settings that customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
+For additional settings that customize build automation, see [Oryx configuration](https://github.com/microsoft/Oryx/blob/master/doc/configuration.md).
To access the build and deployment logs, see [Access deployment logs](#access-deployment-logs).
For more information on how App Service runs and builds Python apps in Linux, se
> [!NOTE] > The `PRE_BUILD_SCRIPT_PATH` and `POST_BUILD_SCRIPT_PATH` settings are identical to `PRE_BUILD_COMMAND` and `POST_BUILD_COMMAND` and are supported for legacy purposes.
->
+>
> A setting named `SCM_DO_BUILD_DURING_DEPLOYMENT`, if it contains `true` or 1, triggers an Oryx build during deployment. The setting is true when deploying using git, the Azure CLI command `az webapp up`, and Visual Studio Code. > [!NOTE]
For more information on how App Service runs and builds Python apps in Linux, se
Existing web applications can be redeployed to Azure as follows: 1. **Source repository**: Maintain your source code in a suitable repository like GitHub, which enables you to set up continuous deployment later in this process.
- 1. Your *requirements.txt* file must be at the root of your repository for App Service to automatically install the necessary packages.
+ 1. Your *requirements.txt* file must be at the root of your repository for App Service to automatically install the necessary packages.
1. **Database**: If your app depends on a database, provision the necessary resources on Azure as well. See [Tutorial: Deploy a Django web app with PostgreSQL - create a database](tutorial-python-postgresql-app.md#3-create-postgres-database-in-azure) for an example.
If your Django web app includes static front-end files, first follow the instruc
For App Service, you then make the following modifications:
-1. Consider using environment variables (for local development) and App Settings (when deploying to the cloud) to dynamically set the Django `STATIC_URL` and `STATIC_ROOT` variables. For example:
+1. Consider using environment variables (for local development) and App Settings (when deploying to the cloud) to dynamically set the Django `STATIC_URL` and `STATIC_ROOT` variables. For example:
```python STATIC_URL = os.environ.get("DJANGO_STATIC_URL", "/static/")
When deployed to App Service, Python apps run within a Linux Docker container th
This container has the following characteristics: - Apps are run using the [Gunicorn WSGI HTTP Server](https://gunicorn.org/), using the additional arguments `--bind=0.0.0.0 --timeout 600`.
- - You can provide configuration settings for Gunicorn through a *gunicorn.conf.py* file in the project root, as described on [Gunicorn configuration overview](https://docs.gunicorn.org/en/stable/configure.html#configuration-file) (docs.gunicorn.org). You can alternately [customize the startup command](#customize-startup-command).
+ - You can provide configuration settings for Gunicorn through a *gunicorn.conf.py* file in the project root, as described on [Gunicorn configuration overview](https://docs.gunicorn.org/en/stable/configure.html#configuration-file) (docs.gunicorn.org). You can alternately [customize the startup command](#customize-startup-command).
- - To protect your web app from accidental or deliberate DDOS attacks, Gunicorn is run behind an Nginx reverse proxy as described on [Deploying Gunicorn](https://docs.gunicorn.org/en/latest/deploy.html) (docs.gunicorn.org).
+ - To protect your web app from accidental or deliberate DDOS attacks, Gunicorn is run behind an Nginx reverse proxy as described on [Deploying Gunicorn](https://docs.gunicorn.org/en/latest/deploy.html) (docs.gunicorn.org).
- By default, the base container image includes only the Flask web framework, but the container supports other frameworks that are WSGI-compliant and compatible with Python 3.6+, such as Django.
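The Gunicorn options mentioned above can be collected in a *gunicorn.conf.py* at the project root instead of passed on the command line; a minimal sketch using standard Gunicorn configuration keys (the values shown are illustrative):

```python
# gunicorn.conf.py -- standard Gunicorn settings keys; values are examples only.
bind = "0.0.0.0"   # listen on all interfaces, as App Service requires
timeout = 600      # match the default arguments App Service passes
workers = 4        # illustrative worker count
accesslog = "-"    # write the access log to stdout
```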
This container has the following characteristics:
The *requirements.txt* file *must* be in the project root for dependencies to be installed. Otherwise, the build process reports the error: "Could not find setup.py or requirements.txt; Not running pip install." If you encounter this error, check the location of your requirements file. -- App Service automatically defines an environment variable named `WEBSITE_HOSTNAME` with the web app's URL, such as `msdocs-hello-world.azurewebsites.net`. It also defines `WEBSITE_SITE_NAME` with the name of your app, such as `msdocs-hello-world`.
-
+- App Service automatically defines an environment variable named `WEBSITE_HOSTNAME` with the web app's URL, such as `msdocs-hello-world.azurewebsites.net`. It also defines `WEBSITE_SITE_NAME` with the name of your app, such as `msdocs-hello-world`.
+ - npm and Node.js are installed in the container so you can run Node-based build tools, such as yarn. ## Container startup process
If your main app module is contained in a different file, use a different name f
### Default behavior
-If the App Service doesn't find a custom command, a Django app, or a Flask app, then it runs a default read-only app, located in the _opt/defaultsite_ folder and shown in the following image.
+If App Service doesn't find a custom command, a Django app, or a Flask app, then it runs a default read-only app, located in the *opt/defaultsite* folder and shown in the following image.
If you deployed code and still see the default app, see [Troubleshooting - App doesn't appear](#app-doesnt-appear).
To specify a startup command or command file:
```azurecli az webapp config set --resource-group <resource-group-name> --name <app-name> --startup-file "<custom-command>" ```
-
+ Replace `<custom-command>` with either the full text of your startup command or the name of your startup command file.
-
+ App Service ignores any errors that occur when processing a custom startup command or file, then continues its startup process by looking for Django and Flask apps. If you don't see the behavior you expect, check that your startup command or file is error-free and that a startup command file is deployed to App Service along with your app code. You can also check the [Diagnostic logs](#access-diagnostic-logs) for additional information. Also check the app's **Diagnose and solve problems** page on the [Azure portal](https://portal.azure.com). ### Example startup commands -- **Added Gunicorn arguments**: The following example adds the `--workers=4` to a Gunicorn command line for starting a Django app:
+- **Added Gunicorn arguments**: The following example adds the `--workers=4` to a Gunicorn command line for starting a Django app:
```bash # <module-path> is the relative path to the folder that contains the module # that contains wsgi.py; <module> is the name of the folder containing wsgi.py. gunicorn --bind=0.0.0.0 --timeout 600 --workers=4 --chdir <module_path> <module>.wsgi
- ```
+ ```
    For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org).

- **Enable production logging for Django**: Add the `--access-logfile '-'` and `--error-logfile '-'` arguments to the command line:

    ```bash
    # '-' for the log files means stdout for --access-logfile and stderr for --error-logfile.
    gunicorn --bind=0.0.0.0 --timeout 600 --workers=4 --chdir <module_path> <module>.wsgi --access-logfile '-' --error-logfile '-'
    ```
These logs will appear in the [App Service log stream](#access-diagnostic-logs). For more information, see [Gunicorn logging](https://docs.gunicorn.org/en/stable/settings.html#logging) (docs.gunicorn.org).
- **Custom Flask main module**: By default, App Service assumes that a Flask app's main module is *application.py* or *app.py*. If your main module uses a different name, then you must customize the startup command. For example, if you have a Flask app whose main module is *hello.py* and the Flask app object in that file is named `myapp`, then the command is as follows:

    ```bash
    gunicorn --bind=0.0.0.0 --timeout 600 hello:myapp
    ```

    If your main module is in a subfolder, such as `website`, specify that folder with the `--chdir` argument:

    ```bash
    gunicorn --bind=0.0.0.0 --timeout 600 --chdir website hello:myapp
    ```

- **Use a non-Gunicorn server**: To use a different web server, such as [aiohttp](https://aiohttp.readthedocs.io/en/stable/web_quickstart.html), use the appropriate command as the startup command or in the startup command file.
The following sections provide additional guidance for specific issues.
- **You see the default app after deploying your own app code.** The [default app](#default-behavior) appears because you either haven't deployed your app code to App Service, or App Service failed to find your app code and ran the default app instead.
    - Restart the App Service, wait 15-20 seconds, and check the app again.

    - Be sure you're using App Service for Linux rather than a Windows-based instance. From the Azure CLI, run the command `az webapp show --resource-group <resource-group-name> --name <app-name> --query kind`, replacing `<resource-group-name>` and `<app-name>` accordingly. You should see `app,linux` as output; otherwise, re-create the App Service and choose Linux.

    - Use [SSH](#open-ssh-session-in-browser) to connect directly to the App Service container and verify that your files exist under *site/wwwroot*. If your files don't exist, use the following steps:
        1. Create an app setting named `SCM_DO_BUILD_DURING_DEPLOYMENT` with the value of 1, redeploy your code, wait a few minutes, then try to access the app again. For more information on creating app settings, see [Configure an App Service app in the Azure portal](configure-common.md).
        1. Review your deployment process, [check the deployment logs](#access-deployment-logs), correct any errors, and redeploy the app.
    - If your files exist, then App Service wasn't able to identify your specific startup file. Check that your app is structured as App Service expects for [Django](#django-app) or [Flask](#flask-app), or use a [custom startup command](#customize-startup-command).
- <a name="service-unavailable"></a>**You see the message "Service Unavailable" in the browser.** The browser has timed out waiting for a response from App Service, which indicates that App Service started the Gunicorn server, but the app itself did not start. This condition could indicate that the Gunicorn arguments are incorrect, or that there's an error in the app code.
    - Refresh the browser, especially if you're using the lowest pricing tiers in your App Service Plan. The app may take longer to start up when using free tiers, for example, and becomes responsive after you refresh the browser.

    - Check that your app is structured as App Service expects for [Django](#django-app) or [Flask](#flask-app), or use a [custom startup command](#customize-startup-command).

    - Examine the [app log stream](#access-diagnostic-logs) for any error messages. The logs will show any errors in the app code.
#### Could not find setup.py or requirements.txt

- **The log stream shows "Could not find setup.py or requirements.txt; Not running pip install."**: The Oryx build process failed to find your *requirements.txt* file.
    - Connect to the web app's container via [SSH](#open-ssh-session-in-browser) and verify that *requirements.txt* is named correctly and exists directly under *site/wwwroot*. If it doesn't exist, make sure the file exists in your repository and is included in your deployment. If it exists in a separate folder, move it to the root.
#### ModuleNotFoundError when app starts
If you're encountering this error with the sample in [Tutorial: Deploy a Django
- **You see the message, "Fatal SSL Connection is Required"**: Check any usernames and passwords used to access resources (such as databases) from within the app.
## More resources
- [Tutorial: Python app with PostgreSQL](tutorial-python-postgresql-app.md) - [Tutorial: Deploy from private container repository](tutorial-custom-container.md?pivots=container-linux)
app-service Configure Language Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-ruby.md
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported Ruby versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
az webapp list-runtimes --os linux | grep RUBY
``` You can run an unsupported version of Ruby by building your own container image instead. For more information, see [use a custom Docker image](tutorial-custom-container.md?pivots=container-linux).
az webapp config set --resource-group <resource-group-name> --name <app-name> --
> [!NOTE] > If you see errors similar to the following during deployment time:
>
> ```
> Your Ruby version is 2.3.3, but your Gemfile specified 2.3.1
> ```
>
> or
>
> ```
> rbenv: version `2.3.1' is not installed
> ```
>
> It means that the Ruby version configured in your project is different from the version that's installed in the container you're running (`2.3.3` in the example above). Check both *Gemfile* and *.ruby-version*, and verify that the Ruby version is either not set or set to the version that's installed in the container you're running (`2.3.3` in the example above).

## Access environment variables
ENV['WEBSITE_SITE_NAME']
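The line above reads an app setting from the process environment. As a minimal sketch (the fallback value here is hypothetical, not part of App Service), you can guard against the variable being unset during local development:

```ruby
# WEBSITE_SITE_NAME is set automatically by App Service;
# when running locally it is usually unset, so provide a fallback.
site_name = ENV['WEBSITE_SITE_NAME'] || 'local-dev'
puts "Running as: #{site_name}"
```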
When you deploy a [Git repository](deploy-local-git.md), or a [Zip package](deploy-zip.md) [with build automation enabled](deploy-zip.md#enable-build-automation-for-zip-deploy), the deployment engine (Kudu) automatically runs the following post-deployment steps by default:

1. Check if a *Gemfile* exists.
1. Run `bundle clean`.
1. Run `bundle install --path "vendor/bundle"`.
1. Run `bundle package` to package gems into the *vendor/cache* folder.
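The default steps above can be sketched as a shell function. This is an illustration only, not the actual Kudu implementation, and the function name is hypothetical:

```shell
# Hypothetical sketch of Kudu's default Ruby post-deployment steps.
deploy_ruby_app() {
  cd "$1" || return 1
  if [ ! -f Gemfile ]; then
    echo "No Gemfile found; skipping bundle steps"
    return 0
  fi
  bundle clean
  bundle install --path "vendor/bundle"
  bundle package   # caches gems into vendor/cache
}

# A directory without a Gemfile skips the bundler steps entirely.
mkdir -p /tmp/no-gemfile-app
deploy_ruby_app /tmp/no-gemfile-app
```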
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview.md
*Azure App Service* is an HTTP-based service for hosting web applications, REST APIs, and mobile back ends. You can develop in your favorite language, be it .NET, .NET Core, Java, Ruby, Node.js, PHP, or Python. Applications run and scale with ease on both Windows and [Linux](#app-service-on-linux)-based environments.
App Service not only adds the power of Microsoft Azure to your application, such as security, load balancing, autoscaling, and automated management; you can also take advantage of its DevOps capabilities, such as continuous deployment from Azure DevOps, GitHub, Docker Hub, and other sources, package management, staging environments, custom domains, and TLS/SSL certificates.
With App Service, you pay for the Azure compute resources you use. The compute resources you use are determined by the *App Service plan* that you run your apps on. For more information, see [Azure App Service plans overview](overview-hosting-plans.md).
## Why use App Service?
App Service can also host web apps natively on Linux for supported application stacks.
### Built-in languages and frameworks
App Service on Linux supports a number of language-specific built-in images. Just deploy your code. Supported languages include Node.js, Java (JRE 8 and JRE 11), PHP, Python, .NET Core, and Ruby. Run [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az_webapp_list_runtimes) to view the latest languages and supported versions. If the runtime your application requires isn't supported in the built-in images, you can deploy it with a custom container.
Outdated runtimes are periodically removed from the Web Apps Create and Configuration blades in the Portal. These runtimes are hidden from the Portal when they are deprecated by the maintaining organization or found to have significant vulnerabilities. These options are hidden to guide customers to the latest runtimes, where they will be the most successful.
When an outdated runtime is hidden from the Portal, any of your existing sites using that version will continue to run. If a runtime is fully removed from the App Service platform, your Azure subscription owner(s) will receive an email notice before the removal.
If you need to create another web app with an outdated runtime version that is n
> Linux and Windows App Service plans can now share resource groups. This limitation has been lifted from the platform, and existing resource groups have been updated to support this.

* App Service on Linux is not supported on the [Shared](https://azure.microsoft.com/pricing/details/app-service/plans/) pricing tier.
* The Azure portal shows only features that currently work for Linux apps. As features are enabled, they're activated on the portal.
* When deployed to built-in images, your code and content are allocated a storage volume for web content, backed by Azure Storage. The disk latency of this volume is higher and more variable than the latency of the container filesystem. Apps that require heavy read-only access to content files may benefit from the custom container option, which places files in the container filesystem instead of on the content volume.
## Next steps
app-service Quickstart Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arc.md
az group create --name myResourceGroup --location eastus
[!INCLUDE [app-service-arc-get-custom-location](../../includes/app-service-arc-get-custom-location.md)]

## 3. Create an app
The following example creates a Node.js app. Replace `<app-name>` with a name that's unique within your cluster (valid characters are `a-z`, `0-9`, and `-`). To see all supported runtimes, run [`az webapp list-runtimes --os linux`](/cli/azure/webapp).
```azurecli-interactive
az webapp create \
az webapp deployment source config-zip --resource-group myResourceGroup --name <
> [!NOTE] > To use Log Analytics, you should've previously enabled it when [installing the App Service extension](manage-create-arc-environment.md#install-the-app-service-extension). If you installed the extension without Log Analytics, skip this step.
Navigate to the [Log Analytics workspace that's configured with your App Service extension](manage-create-arc-environment.md#install-the-app-service-extension), then click Logs in the left navigation. Run the following sample query to show logs over the past 72 hours. Replace `<app-name>` with your web app name. If there's an error when running a query, try again in 10-15 minutes (there may be a delay for Log Analytics to start receiving logs from your application).
```kusto
let StartTime = ago(72h);
AppServiceConsoleLogs_CL
| where AppName_s =~ "<app-name>"
```
The application logs for all the apps hosted in your Kubernetes cluster are logged to the Log Analytics workspace in the custom log table named `AppServiceConsoleLogs_CL`.
**Log_s** contains application logs for a given App Service and **AppName_s** contains the App Service app name. In addition to logs you write via your application code, the Log_s column also contains logs on container startup, shutdown, and Function Apps.
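For example, a query that keeps only the columns described above might look like the following sketch; adjust the app name and columns to your workspace:

```kusto
AppServiceConsoleLogs_CL
| where AppName_s =~ "<app-name>"
| project TimeGenerated, AppName_s, Log_s
```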
app-service Quickstart Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-ruby.md
## Download the sample
1. In a terminal window, clone the sample application to your local machine, and navigate to the directory containing the sample code.
    ```bash
    git clone https://github.com/Azure-Samples/ruby-docs-hello-world
    ```
    ```bash
    git branch -m main
    ```
    > [!TIP]
    > The branch name change isn't required by App Service. However, since many repositories are changing their default branch to `main`, this tutorial also shows you how to deploy a repository from `main`. For more information, see [Change deployment branch](deploy-local-git.md#change-deployment-branch).
## Create a web app
1. Create a [web app](overview.md#app-service-on-linux) in the `myAppServicePlan` App Service plan.
    In the Cloud Shell, you can use the [`az webapp create`](/cli/azure/webapp) command. In the following example, replace `<app-name>` with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `RUBY|2.6`. To see all supported runtimes, run [`az webapp list-runtimes --os linux`](/cli/azure/webapp).
    ```azurecli-interactive
    az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime 'RUBY|2.6' --deployment-local-git
    ```
&lt; JSON data removed for brevity. &gt; } </pre>
    You've created an empty new web app, with git deployment enabled.

    > [!NOTE]
## Deploy your application

<pre>
remote: Using turbolinks 5.2.0
app-service Tutorial Connect Msi Key Vault Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-javascript.md
---
title: 'Tutorial: JavaScript connect to Azure services securely with Key Vault'
description: Learn how to secure connectivity to back-end Azure services that don't support managed identity natively from a JavaScript web app
ms.devlang: javascript, azurecli
Last updated: 10/26/2021
---

# Tutorial: Secure Cognitive Service connection from JavaScript App Service using Key Vault
## Configure JavaScript app

Clone the sample repository locally and deploy the sample application to App Service. Replace *\<app-name>* with a unique name.

```azurecli-interactive
# Clone and prepare the sample application
git clone https://github.com/Azure-Samples/app-service-language-detector.git
cd app-service-language-detector/javascript
zip default.zip *.*

# Save the app name as a variable for convenience
appName=<app-name>

az appservice plan create --resource-group $groupName --name $appName --sku FREE --location $region --is-linux
az webapp create --resource-group $groupName --plan $appName --name $appName --runtime "node|14-lts"
az webapp config appsettings set --resource-group $groupName --name $appName --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true
az webapp deployment source config-zip --resource-group $groupName --name $appName --src ./default.zip
```

The preceding commands:

* Create a Linux App Service plan
* Create a web app for Node.js 14 LTS
* Configure the web app to install the npm packages on deployment
* Upload the zip file and install the npm packages

## Configure secrets as app settings
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
The sample project contains a simple ASP.NET application that uses a custom font
### Install the font
In Windows Explorer, navigate to *custom-font-win-container-master/CustomFontSample*, right-click *FrederickatheGreat-Regular.ttf*, and select **Install**.
This font is publicly available from [Google Fonts](https://fonts.google.com/specimen/Fredericka+the+Great).
Open the *custom-font-win-container-master/CustomFontSample.sln* file in Visual Studio.
Type `Ctrl+F5` to run the app without debugging. The app is displayed in your default browser.
:::image type="content" source="media/tutorial-custom-container/local-app-in-browser.png" alt-text="Screenshot showing the app displayed in the default browser.":::
At the end of the file, add the following line and save the file:
```dockerfile
RUN ${source:-obj/Docker/publish/InstallFont.ps1}
```
You can find *InstallFont.ps1* in the **CustomFontSample** project. It's a simple script that installs the font. You can find a more complex version of the script in the [Script Center](https://gallery.technet.microsoft.com/scriptcenter/fb742f92-e594-4d0c-8b79-27564c575133).
> [!NOTE] > To test the Windows container locally, ensure that Docker is started on your local machine.
A terminal window is opened and displays the image deployment progress. Wait for
## Sign in to Azure
Sign in to the Azure portal at <https://portal.azure.com>.
## Create a web app
The streamed logs look like this:
::: zone pivot="container-linux"

Azure App Service uses the Docker container technology to host both built-in images and custom images. To see a list of built-in images, run the Azure CLI command [`az webapp list-runtimes --os linux`](/cli/azure/webapp#az_webapp_list_runtimes). If those images don't satisfy your needs, you can build and deploy a custom image.

In this tutorial, you learn how to:

> [!div class="checklist"]
>
> - Push a custom Docker image to Azure Container Registry
> - Deploy the custom image to App Service
> - Configure environment variables
> - Pull the image into App Service using a managed identity
> - Access diagnostic logs
> - Enable CI/CD from Azure Container Registry to App Service
> - Connect to the container using SSH
Completing this tutorial incurs a small charge in your Azure account for the container registry and can incur more costs for hosting the container for longer than a month.
cd docker-django-webapp-linux
### Download from GitHub
Instead of using git clone, you can visit [https://github.com/Azure-Samples/docker-django-webapp-linux](https://github.com/Azure-Samples/docker-django-webapp-linux), select **Clone**, and then select **Download ZIP**.

Unpack the ZIP file into a folder named *docker-django-webapp-linux*.
Then, open a terminal window in the *docker-django-webapp-linux* folder.

## (Optional) Examine the Docker file
The file in the sample named *Dockerfile* describes the Docker image and contains configuration instructions:
```Dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.6
ENV SSH_PASSWD "root:Docker!"
RUN apt-get update \
    && apt-get install -y --no-install-recommends dialog \
    && apt-get update \
    && apt-get install -y --no-install-recommends openssh-server \
    && echo "$SSH_PASSWD" | chpasswd
COPY sshd_config /etc/ssh/
COPY init.sh /usr/local/bin/
RUN chmod u+x /usr/local/bin/init.sh

EXPOSE 8000 2222

ENTRYPOINT ["init.sh"]
```
- The first group of commands installs the app's requirements in the environment.
- The second group of commands creates an [SSH](https://www.ssh.com/ssh/protocol/) server for secure communication between the container and the host.
- The last line, `ENTRYPOINT ["init.sh"]`, invokes `init.sh` to start the SSH service and the Python server.
## Build and test the image locally

> [!NOTE]
> Docker Hub has [quotas on the number of anonymous pulls per IP and the number of authenticated pulls per free user (see **Data transfer**)](https://www.docker.com/pricing). If you notice your pulls from Docker Hub are being limited, try `docker login` if you're not already logged in.
1. Run the following command to build the image:

    ```bash
    docker build --tag appsvc-tutorial-custom-image .
    ```

1. Test that the build works by running the Docker container locally:

    ```bash
    docker run -it -p 8000:8000 appsvc-tutorial-custom-image
    ```
    This [`docker run`](https://docs.docker.com/engine/reference/commandline/run/) command specifies the port with the `-p` argument followed by the name of the image. `-it` lets you stop it with `Ctrl+C`.
    > [!TIP]
    > If you're running on Windows and see the error, *standard_init_linux.go:211: exec user process caused "no such file or directory"*, the *init.sh* file contains CR-LF line endings instead of the expected LF endings. This error happens if you used git to clone the sample repository but omitted the `--config core.autocrlf=input` parameter. In this case, clone the repository again with the `--config` argument. You might also see the error if you edited *init.sh* and saved it with CRLF endings. In this case, save the file again with LF endings only.
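If you need to repair the line endings directly, one way (assuming GNU `sed`; on macOS use `sed -i ''` instead) is to strip the carriage returns, sketched here against a stand-in file:

```shell
# Create a stand-in for init.sh with CRLF endings, then normalize to LF.
printf 'echo hello\r\n' > init.sh
sed -i 's/\r$//' init.sh
# Count any remaining carriage returns (expect 0).
remaining=$(grep -c "$(printf '\r')" init.sh || true)
echo "carriage returns left: $remaining"
```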
In this section, you push the image to Azure Container Registry from which App S
    ```azurecli-interactive
    az acr credential show --resource-group myResourceGroup --name <registry-name>
    ```

    The JSON output of this command provides two passwords along with the registry's user name.

1. Use the `docker login` command to sign in to the container registry:

    ```bash
    docker login <registry-name>.azurecr.io --username <registry-username>
    ```

    Replace `<registry-name>` and `<registry-username>` with values from the previous steps. When prompted, type in one of the passwords from the previous step. You use the same registry name in all the remaining steps of this section.
In this section, you push the image to Azure Container Registry from which App S
    ```bash
    docker tag appsvc-tutorial-custom-image <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest
    ```
1. Use the `docker push` command to push the image to the registry:
In this section, you push the image to Azure Container Registry from which App S
    ```azurecli-interactive
    az acr repository list -n <registry-name>
    ```

    The output should show the name of your image.
## Configure App Service to deploy the image from the registry
To deploy a container to Azure App Service, you first create a web app on App Se
    ```azurecli-interactive
    az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --deployment-container-image-name <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest
    ```

    Replace `<app-name>` with a name for the web app, which must be unique across all of Azure. Also replace `<registry-name>` with the name of your registry from the previous section.
1. Use [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) to set the `WEBSITES_PORT` environment variable as expected by the app code:

    ```azurecli-interactive
    az webapp config appsettings set --resource-group myResourceGroup --name <app-name> --settings WEBSITES_PORT=8000
    ```

    Replace `<app-name>` with the name you used in the previous step.

    For more information on this environment variable, see the [readme in the sample's GitHub repository](https://github.com/Azure-Samples/docker-django-webapp-linux).

1. Enable [the system-assigned managed identity](./overview-managed-identity.md) for the web app by using the [`az webapp identity assign`](/cli/azure/webapp/identity#az_webapp_identity-assign) command:
To deploy a container to Azure App Service, you first create a web app on App Se
    ```azurecli-interactive
    az account show --query id --output tsv
    ```
1. Grant the managed identity permission to access the container registry:
To deploy a container to Azure App Service, you first create a web app on App Se
    ```azurecli-interactive
    az resource update --ids /subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/<app-name>/config/web --set properties.acrUseManagedIdentityCreds=True
    ```

    Replace the following values:
    - `<subscription-id>` with the subscription ID retrieved from the `az account show` command.
    - `<app-name>` with the name of your web app.
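Rather than pasting the subscription ID into the long resource ID by hand, you can assemble it from shell variables first. The values below are placeholders for illustration only:

```shell
# Build the resource ID for the site's web config from variables.
subscriptionId="00000000-0000-0000-0000-000000000000"   # from: az account show
appName="my-custom-image-app"                           # your web app name
resourceId="/subscriptions/$subscriptionId/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/$appName/config/web"
echo "$resourceId"
```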
You can complete these steps once the image is pushed to the container registry
    ```azurecli-interactive
    az webapp config container set --name <app-name> --resource-group myResourceGroup --docker-custom-image-name <registry-name>.azurecr.io/appsvc-tutorial-custom-image:latest --docker-registry-server-url https://<registry-name>.azurecr.io
    ```

    Replace `<app-name>` with the name of your web app and replace `<registry-name>` in two places with the name of your registry.

    - When using a registry other than Docker Hub (as this example shows), `--docker-registry-server-url` must be formatted as `https://` followed by the fully qualified domain name of the registry.
    - The message, "No credential was provided to access Azure Container Registry. Trying to look up..." indicates that Azure is using the app's managed identity to authenticate with the container registry rather than asking for a username and password.
While you're waiting for the App Service to pull in the image, it's helpful to s
    ```azurecli-interactive
    az webapp log config --name <app-name> --resource-group myResourceGroup --docker-container-logging filesystem
    ```
-
+ 1. Enable the log stream:
+
+    ```azurecli-interactive
+    az webapp log tail --name <app-name> --resource-group myResourceGroup
+    ```
-
+ If you don't see console logs immediately, check again in 30 seconds. You can also inspect the log files from the browser at `https://<app-name>.scm.azurewebsites.net/api/logs/docker`.
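The Kudu log endpoint mentioned above follows a fixed URL pattern; a small sketch (with a hypothetical app name) that builds it:

```shell
# Hypothetical app name -- substitute your own.
app_name="my-web-app"

# Docker log files are exposed through the app's Kudu (SCM) site:
log_api_url="https://${app_name}.scm.azurewebsites.net/api/logs/docker"
echo "$log_api_url"
```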
In this section, you make a change to the web app code, rebuild the image, and t
</div> </nav> ```
-
+ 1. Save your changes.
+ 1. Change to the *docker-django-webapp-linux* folder and rebuild the image:
COPY sshd_config /etc/ssh/
EXPOSE 8000 2222 ```
-Port 2222 is an internal port accessible only by containers within the bridge network of a private virtual network.
+Port 2222 is an internal port accessible only by containers within the bridge network of a private virtual network.
Finally, the entry script, *init.sh*, starts the SSH server.
service ssh start
1. When you sign in, you're redirected to an informational page for the web app. Select **SSH** at the top of the page to open the shell and use commands. For example, you can examine the processes running within it using the `top` command.
-
+ ## Clean up resources
+
+ The resources you created in this article might incur ongoing costs. To clean up the resources, you only need to delete the resource group that contains them:
What you learned:
::: zone pivot="container-windows" > [!div class="checklist"]
-> * Deploy a custom image to a private container registry
-> * Deploy and the custom image in App Service
-> * Update and redeploy the image
-> * Access diagnostic logs
-> * Connect to the container using SSH
+>
+> - Deploy a custom image to a private container registry
+> - Deploy the custom image in App Service
+> - Update and redeploy the image
+> - Access diagnostic logs
+> - Connect to the container using SSH
::: zone-end ::: zone pivot="container-linux" > [!div class="checklist"]
-> * Push a custom Docker image to Azure Container Registry
-> * Deploy the custom image to App Service
-> * Configure environment variables
-> * Pull image into App Service using a managed identity
-> * Access diagnostic logs
-> * Enable CI/CD from Azure Container Registry to App Service
-> * Connect to the container using SSH
+>
+> - Push a custom Docker image to Azure Container Registry
+> - Deploy the custom image to App Service
+> - Configure environment variables
+> - Pull image into App Service using a managed identity
+> - Access diagnostic logs
+> - Enable CI/CD from Azure Container Registry to App Service
+> - Connect to the container using SSH
::: zone-end - In the next tutorial, you learn how to map a custom DNS name to your app. > [!div class="nextstepaction"]
application-gateway Configure Alerts With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-alerts-with-templates.md
+
+ Title: Configure Azure Monitor alerts for Application Gateway
+description: Learn how to use ARM templates to configure Azure Monitor alerts for Application Gateway
++++ Last updated : 03/03/2022++
+# Configure Azure Monitor alerts for Application Gateway
++
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. For more information about Azure Monitor alerts for Application Gateway, see [Monitoring Azure Application Gateway](monitor-application-gateway.md#alerts).
+
+## Configure alerts using ARM templates
+
+You can use ARM templates to quickly configure important alerts for Application Gateway. Before you begin, consider the following details:
+
+- Azure Monitor alert rules are charged based on the type and number of signals they monitor. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) for pricing information before you deploy. You can also see the estimated cost in the portal after deployment:
+ :::image type="content" source="media/configure-alerts-with-templates/alert-pricing.png" alt-text="Image showing application gateway pricing details":::
+- You need to create an Azure Monitor action group in advance and then use its Resource ID for as many alerts as you need. Azure Monitor alerts use this action group to notify users that an alert has been triggered. For more information, see [Create and manage action groups in the Azure portal](../azure-monitor/alerts/action-groups.md).
+>[!TIP]
+> You can manually form the ResourceID for your Action Group by following these steps.
+> 1. Select Azure Monitor in your Azure portal.
+> 1. Open the Alerts page and select Action Groups.
+> 1. Select the action group to view its details.
+> 1. Use the Resource Group Name, Action Group Name, and subscription information to form the ResourceID for the action group, as shown here: <br>
+> `/subscriptions/<subscription-id-from-your-account>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>`
+- The templates for alerts described here are defined generically for settings like Severity, Aggregation Granularity, Frequency of Evaluation, Condition Type, and so on. You can modify the settings after deployment to meet your needs. See [Understand how metric alerts work in Azure Monitor](../azure-monitor/alerts/alerts-metric-overview.md) for more information.
+- The templates for metric-based alerts use the **Dynamic threshold** value with [High sensitivity](../azure-monitor/alerts/alerts-dynamic-thresholds.md#what-does-sensitivity-setting-in-dynamic-thresholds-mean). You can choose to adjust these settings based on your needs.
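The ResourceID format from the tip above can also be assembled in a script. A minimal sketch with hypothetical names follows; the commented `az monitor action-group show` call returns the same value from an authenticated CLI session:

```shell
# Hypothetical names -- substitute your own subscription, resource group, and action group.
subscription_id="00000000-0000-0000-0000-000000000000"
resource_group="myResourceGroup"
action_group="myActionGroup"

# ResourceID in the format shown in the tip above:
action_group_id="/subscriptions/${subscription_id}/resourcegroups/${resource_group}/providers/microsoft.insights/actiongroups/${action_group}"
echo "$action_group_id"

# Equivalent lookup with the Azure CLI (requires an authenticated session):
# az monitor action-group show --resource-group "$resource_group" --name "$action_group" --query id --output tsv
```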
+
+## ARM templates
+
+The following ARM templates are available to configure Azure Monitor alerts for Application Gateway.
+
+### Alert for Backend Response Status as 5xx
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-5xx%2Fazuredeploy.json)
+
+This alert is based on a metrics signal.
+
+### Alert for average Unhealthy Host Count
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-unhealthy-host%2Fazuredeploy.json)
+
+This alert is based on a metrics signal.
+
+### Alert for Backend Last Byte Response Time
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-lastbyte-resp%2Fazuredeploy.json)
+
+This alert is based on a metrics signal.
+
+### Alert for Key Vault integration issues
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-keyvault-advisor%2Fazuredeploy.json)
+
+This alert is based on an Azure Advisor recommendation.
++
+## Next steps
+
+
+- See [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created by Application Gateway.
+
+- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
application-gateway High Traffic Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/high-traffic-support.md
Check your Compute Unit metric for the past one month. Compute unit metric is a
To get notified of any traffic or utilization anomalies, you can set up alerts on certain metrics. See [metrics documentation](./application-gateway-metrics.md) for the complete list of metrics offered by Application Gateway. See [visualize metrics](./application-gateway-metrics.md#metrics-visualization) in the Azure portal and the [Azure monitor documentation](../azure-monitor/alerts/alerts-metric.md) on how to set alerts for metrics.
+To configure alerts using ARM templates, see [Configure Azure Monitor alerts for Application Gateway](configure-alerts-with-templates.md).
+ ## Alerts for Application Gateway v1 SKU (Standard/WAF) ### Alert if average CPU utilization crosses 80%
automation Delete Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/delete-account.md
To recover an Automation account, ensure that the following conditions are met:
- Before you attempt to recover a deleted Automation account, ensure that the resource group for that account exists.

> [!NOTE]
-> You can't recover your Automation account if the resource group is deleted.
+> If the resource group of the Automation account has been deleted, you must re-create a resource group with the same name to recover the account. After a few hours, the Automation account is repopulated in the list of deleted accounts, and you can then restore it.
### Recover a deleted Automation account
azure-arc Agent Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-overview.md
Title: Overview of the Azure Connected Machine agent description: This article provides a detailed overview of the Azure Arc-enabled servers agent available, which supports monitoring virtual machines hosted in hybrid environments. Previously updated : 03/01/2022 Last updated : 03/03/2022
The following versions of the Windows and Linux operating system are officially
* SUSE Linux Enterprise Server (SLES) 12 and 15 (x64) * Red Hat Enterprise Linux (RHEL) 7 and 8 (x64) * Amazon Linux 2 (x64)
-* Oracle Linux 7 (x64)
+* Oracle Linux 7 and 8 (x64)
> [!WARNING] > The Linux hostname or Windows computer name cannot use one of the reserved words or trademarks in the name, otherwise attempting to register the connected machine with Azure will fail. For a list of reserved words, see [Resolve reserved resource name errors](../../azure-resource-manager/templates/error-reserved-resource-name.md).
Connecting machines in your hybrid environment directly with Azure can be accomp
| At scale | [Connect machines with a Configuration Manager custom task sequence](onboard-configuration-manager-custom-task.md) | At scale | [Connect machines from Automation Update Management](onboard-update-management-machines.md) to create a service principal that installs and configures the agent for multiple machines managed with Azure Automation Update Management to connect machines non-interactively. | --- > [!IMPORTANT] > The Connected Machine agent cannot be installed on an Azure Windows virtual machine. If you attempt to, the installation detects this and rolls back.
The Connected Machine agent for Windows can be installed by using one of the fol
* Manually by running the Windows Installer package `AzureConnectedMachineAgent.msi` from the Command shell. * From a PowerShell session using a scripted method.
-Installing, updating, and removing the Connected Machine agent will not require you to restart your server.
+Installing, upgrading, or removing the Connected Machine agent will not require you to restart your server.
After installing the Connected Machine agent for Windows, the following system-wide configuration changes are applied.
After installing the Connected Machine agent for Windows, the following system-w
|GCArcService |Guest configuration Arc Service |gc_service |Monitors the desired state configuration of the machine.| |ExtensionService |Guest configuration Extension Service | gc_service |Installs the required extensions targeting the machine.|
+* The following virtual service account is created during agent installation.
+
+ | Virtual Account | Description |
+ |||
+ | NT SERVICE\\himds | Unprivileged account used to run the Hybrid Instance Metadata Service. |
+
+ > [!TIP]
+ > This account requires the "Log on as a service" right. This right is automatically granted during agent installation, but if your organization configures user rights assignments with Group Policy, you may need to adjust your Group Policy Object to grant the right to "NT SERVICE\\himds" or "NT SERVICE\\ALL SERVICES" to allow the agent to function.
+
+* The following local security group is created during agent installation.
+
+ | Security group name | Description |
+ ||-|
+ | Hybrid agent extension applications | Members of this security group can request Azure Active Directory tokens for the system-assigned managed identity |
+* The following environment variables are created during agent installation.
+
+    |Name |Default value |Description |
After installing the Connected Machine agent for Windows, the following system-w
The Connected Machine agent for Linux is provided in the preferred package format for the distribution (.RPM or .DEB) that's hosted in the Microsoft [package repository](https://packages.microsoft.com/). The agent is installed and configured with the shell script bundle [Install_linux_azcmagent.sh](https://aka.ms/azcmagent).
-Installing, updating, and removing the Connected Machine agent will not require you to restart your server.
+Installing, upgrading, or removing the Connected Machine agent will not require you to restart your server.
After installing the Connected Machine agent for Linux, the following system-wide configuration changes are applied.
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
This page is updated monthly, so revisit it regularly. If you're looking for ite
- Azure Arc network endpoints are now required, onboarding will abort if they are not accessible - New `--skip-network-check` flag to override the new network check behavior - [Proxy bypass](manage-agent.md#proxy-bypass-for-private-endpoints) is now available for customers using private endpoints. This allows you to send Azure Active Directory and Azure Resource Manager traffic through a proxy server, but skip the proxy server for traffic that should stay on the local network to reach private endpoints.
+- Oracle Linux 8 is now supported
### Fixed
azure-arc Manage Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-agent.md
To clear a configuration property's value, run the following command:
The Azure Connected Machine agent is updated regularly to address bug fixes, stability enhancements, and new functionality. [Azure Advisor](../../advisor/advisor-overview.md) identifies resources that are not using the latest version of machine agent and recommends that you upgrade to the latest version. It will notify you when you select the Azure Arc-enabled server by presenting a banner on the **Overview** page or when you access Advisor through the Azure portal.
-The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. Installing, upgrading, and uninstalling the Azure Connected Machine Agent will not require you to restart your server.
+The Azure Connected Machine agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on your requirements. Installing, upgrading, or uninstalling the Azure Connected Machine Agent will not require you to restart your server.
The following table describes the methods supported to perform the agent upgrade.
azure-arc Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-overview.md
To manage the Azure Connected Machine agent (azcmagent) on Windows, your user ac
The Azure Connected Machine agent is composed of three services, which run on your machine.
-* The Hybrid Instance Metadata Service (himds) service is responsible for all core functionality of Arc. This includes sending heartbeats to Azure, exposing a local instance metadata service for other apps to learn about the machineΓÇÖs Azure resource ID, and retrieve Azure AD tokens to authenticate to other Azure services. This service runs as an unprivileged virtual service account on Windows, and as the **himds** user on Linux.
+* The Hybrid Instance Metadata Service (himds) service is responsible for all core functionality of Arc. This includes sending heartbeats to Azure, exposing a local instance metadata service for other apps to learn about the machine's Azure resource ID, and retrieving Azure AD tokens to authenticate to other Azure services. This service runs as an unprivileged virtual service account (NT SERVICE\\himds) on Windows, and as the **himds** user on Linux. The virtual service account requires the "Log on as a service" right on Windows.
* The Guest Configuration service (GCService) is responsible for evaluating Azure Policy on the machine.
-* The Guest Configuration Extension service (ExtensionService) is responsible for installing, updating, and deleting extensions (agents, scripts, or other software) on the machine.
+* The Guest Configuration Extension service (ExtensionService) is responsible for installing, upgrading, and deleting extensions (agents, scripts, or other software) on the machine.
The guest configuration and extension services run as Local System on Windows, and as root on Linux.
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
recommendations: false Previously updated : 02/26/2022 Last updated : 03/02/2022 # Compare Azure Government and global Azure
This section outlines variations and considerations when using Identity services
### [Azure Active Directory Premium P1 and P2](../active-directory/index.yml)
+For feature variations and limitations, see [Cloud feature availability](../active-directory/authentication/feature-availability.md).
+ The following features have known limitations in Azure Government: - Limitations with B2B Collaboration in supported Azure US Government tenants: - For more information about B2B collaboration limitations in Azure Government and to find out if B2B collaboration is available in your Azure Government tenant, see [Azure AD B2B in government and national clouds](../active-directory/external-identities/b2b-government-national-clouds.md). - B2B collaboration via Power BI is not supported. When you invite a guest user from within Power BI, the B2B flow is not used and the guest user won't appear in the tenant's user list. If a guest user is invited through other means, they'll appear in the Power BI user list, but any sharing request to the user will fail and display a 403 Forbidden error. -- Limitations with multifactor authentication:
- - Hardware OATH tokens are not available in Azure Government.
- - Trusted IPs are not supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multifactor authentication should and should not be required based off the user's current IP address.
--- Limitations with Azure AD join:
- - Enterprise state roaming for Windows 10 devices is not available
+- Limitations with multi-factor authentication:
+ - Trusted IPs are not supported in Azure Government. Instead, use Conditional Access policies with named locations to establish when multi-factor authentication should and should not be required based off the user's current IP address.
## Management and governance
azure-government Connect With Azure Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/connect-with-azure-pipelines.md
Title: Deploy an app in Azure Government with Azure Pipelines
-description: Information on configuring continuous deployment to your applications hosted with a subscription in Azure Government by connecting from Azure Pipelines.
+description: Configure continuous deployment to your applications hosted in Azure Government by connecting from Azure Pipelines.
Previously updated : 11/02/2021 Last updated : 03/02/2022 # Deploy an app in Azure Government with Azure Pipelines
-This article helps you use Azure Pipelines to set up continuous integration (CI) and continuous deployment (CD) of your web app running in Azure Government. CI/CD automates the build of your code from a repo along with the deployment (release) of the built code artifacts to a service or set of services in Azure Government. In this tutorial, you will build a web app and deploy it to an Azure Governments app service. This build and release process is triggered by a change to a code file in the repo.
-
-> [!NOTE]
-> For special considerations when deploying apps to Azure Government, see **[Deploy apps to Azure Government Cloud](/azure/devops/pipelines/library/government-cloud).**
+This article helps you use Azure Pipelines to set up continuous integration (CI) and continuous deployment (CD) of your web app running in Azure Government. CI/CD automates the build of your code from a repo along with the deployment (release) of the built code artifacts to a service or set of services in Azure Government. In this tutorial, you'll build a web app and deploy it to an Azure Government App Service app. This build and release process is triggered by a change to a code file in the repo.
[Azure Pipelines](/azure/devops/pipelines/get-started/what-is-azure-pipelines) is used by teams to configure continuous deployment for applications hosted in Azure subscriptions. We can use this service for applications running in Azure Government by defining [service connections](/azure/devops/pipelines/library/service-endpoints) for Azure Government.
This article helps you use Azure Pipelines to set up continuous integration (CI)
## Prerequisites
-Before starting this tutorial, you must have the following:
+Before starting this tutorial, you must complete the following prerequisites:
+ [Create an organization in Azure DevOps](/azure/devops/organizations/accounts/create-organization) + [Create and add a project to the Azure DevOps organization](/azure/devops/organizations/projects/create-project?;bc=%2fazure%2fdevops%2fuser-guide%2fbreadcrumb%2ftoc.json&tabs=new-nav&toc=%2fazure%2fdevops%2fuser-guide%2ftoc.json) + Install and set up [Azure PowerShell](/powershell/azure/install-az-ps)
-If you don't have an active Azure Government subscription, create a [free account](https://azure.microsoft.com/overview/clouds/government/) before you begin.
+If you don't have an active Azure Government subscription, create a [free account](https://azure.microsoft.com/global-infrastructure/government/request/) before you begin.
## Create Azure Government app service
Follow through one of the quickstarts below to set up a Build for your specific
1. Download or copy and paste the [service principal creation](https://github.com/yujhongmicrosoft/spncreationn/blob/master/spncreation.ps1) PowerShell script into an IDE or editor.
+ > [!NOTE]
+ > This script will be updated to use the Azure Az PowerShell module instead of the deprecated AzureRM PowerShell module.
+ 2. Open up the file and navigate to the `param` parameter. Replace the `$environmentName` variable with
-AzureUSGovernment." This sets the service principal to be created in Azure Government.
+"AzureUSGovernment". This action sets the service principal to be created in Azure Government.
3. Open your PowerShell window and run the following command. This command sets a policy that enables running local files. `Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass`
- When you are asked whether you want to change the execution policy, enter "A" (for "Yes to All").
+ When you're asked whether you want to change the execution policy, enter "A" (for "Yes to All").
4. Navigate to the directory that has the edited script above.
AzureUSGovernment." This sets the service principal to be created in Azure Gover
7. When prompted for the "password" parameter, enter your desired password.
-8. After providing your Azure Government subscription credentials, you should see the following:
+8. After providing your Azure Government subscription credentials, you should see the following message:
> [!NOTE] > The Environment variable should be `AzureUSGovernment`.
-9. After the script has run, you should see your service connection values. Copy these values as we will need them when setting up our endpoint.
+9. After the script has run, you should see your service connection values. Copy these values as we'll need them when setting up our endpoint.
![ps4](./media/documentation-government-vsts-img11.png)
Follow [Deploy a web app to Azure App Services](/azure/devops/pipelines/apps/cd/
**Do I need a build agent?** <br/> You need at least one [agent](/azure/devops/pipelines/agents/agents) to run your deployments. By default, the build and deployment processes are configured to use the [hosted agents](/azure/devops/pipelines/agents/agents#microsoft-hosted-agents). Configuring a private agent would limit data sharing outside of Azure Government.
-**I use Team Foundation Server on-premises. Can I configure CD on my server to target Azure Government?** <br/>
-Currently, Team Foundation Server cannot be used to deploy to an Azure Government Cloud.
+**I use Team Foundation Server on premises. Can I configure CD on my server to target Azure Government?** <br/>
+Currently, Team Foundation Server can't be used to deploy to an Azure Government Cloud.
## Next steps -- Subscribe to the [Azure Government blog](https://blogs.msdn.microsoft.com/azuregov/)
+- Subscribe to the [Azure Government blog](https://devblogs.microsoft.com/azuregov/)
- Get help on Stack Overflow by using the "[azure-gov](https://stackoverflow.com/questions/tagged/azure-gov)" tag
azure-government Documentation Government Overview Jps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-overview-jps.md
recommendations: false Previously updated : 03/01/2022 Last updated : 03/02/2022 # Public safety and justice in Azure Government
Microsoft treats Criminal Justice Information Services (CJIS) compliance as a co
The [Criminal Justice Information Services](https://www.fbi.gov/services/cjis) (CJIS) Division of the US Federal Bureau of Investigation (FBI) gives state, local, and federal law enforcement and criminal justice agencies access to criminal justice information (CJI), for example, fingerprint records and criminal histories. Law enforcement and other government agencies in the United States must ensure that their use of cloud services for the transmission, storage, or processing of CJI complies with the [CJIS Security Policy](https://www.fbi.gov/services/cjis/cjis-security-policy-resource-center/view), which establishes minimum security requirements and controls to safeguard CJI.
-The CJIS Security Policy integrates presidential and FBI directives, federal laws, and the criminal justice community's Advisory Policy Board decisions, along with guidance from the National Institute of Standards and Technology (NIST). The CJIS Security Policy is updated periodically to reflect evolving security requirements.
+### Azure Government and CJIS Security Policy
-The CJIS Security Policy defines 13 areas that private contractors such as cloud service providers must evaluate to determine if their use of cloud services can be consistent with CJIS requirements. These areas correspond closely to control families in [NIST SP 800-53](https://csrc.nist.gov/Projects/risk-management/sp800-53-controls/release-search#!/800-53), which is also the basis for the US Federal Risk and Authorization Management Program (FedRAMP). The FBI CJIS Information Security Officer (ISO) Program Office has published a [security control mapping of CJIS Security Policy requirements to NIST SP 800-53](https://www.fbi.gov/file-repository/csp-v5_5-to-nist-controls-mapping-1.pdf/view). The corresponding NIST SP 800-53 controls are listed for each CJIS Security Policy section.
+Microsoft's commitment to meeting the applicable CJIS regulatory controls help criminal justice organizations be compliant with the CJIS Security Policy when implementing cloud-based solutions. For more information about Azure support for CJIS, see [Azure CJIS compliance offering](/azure/compliance/offerings/offering-cjis).
-All private contractors who process CJI must sign the CJIS Security Addendum, a uniform agreement approved by the US Attorney General that helps ensure the security and confidentiality of CJI required by the Security Policy. It commits the contractor to maintaining a security program consistent with federal and state laws, regulations, and standards. The addendum also limits the use of CJI to the purposes for which a government agency provided it.
-
-### Azure and CJIS Security Policy
-
-Microsoft will sign the CJIS Security Addendum in states with CJIS Information Agreements. These agreements tell state law enforcement authorities responsible for compliance with CJIS Security Policy how Microsoft's cloud security controls help protect the full lifecycle of data and ensure appropriate background screening of operating personnel with potential access to CJI.
-
-Microsoft has agreements signed with nearly all 50 states and the District of Columbia except for the following states: Delaware, Louisiana, Maryland, New Mexico, Ohio, and South Dakota. Microsoft continues to work with state governments to enter into CJIS Information Agreements.
-
-Microsoft's commitment to meeting the applicable CJIS regulatory controls help criminal justice organizations be compliant with the CJIS Security Policy when implementing cloud-based solutions. Microsoft can accommodate customers subject to the CJIS Security Policy requirements in:
--- [Azure Government](./documentation-government-welcome.md)-- [Dynamics 365 US Government](/power-platform/admin/microsoft-dynamics-365-government#certifications-and-accreditations)-- [Office 365 GCC](/office365/servicedescriptions/office-365-platform-service-description/office-365-us-government/gcc#us-government-community-compliance)-
-Microsoft has assessed the operational policies and procedures of Microsoft Azure Government, Dynamics 365 US Government, and Office 365 GCC, and will attest to their ability in the applicable services agreements to meet FBI requirements. For more information about Azure support for CJIS, see [Azure CJIS compliance offering](/azure/compliance/offerings/offering-cjis).
-
-The remainder of this article discusses technologies that you can use to safeguard CJI stored or processed in Azure cloud services. These technologies can help you establish sole control over CJI that you're responsible for.
+The remainder of this article discusses technologies that you can use to safeguard CJI stored or processed in Azure cloud services. **These technologies can help you establish sole control over CJI that you're responsible for.**
> [!NOTE] > You are wholly responsible for ensuring your own compliance with all applicable laws and regulations. Information provided in this article does not constitute legal advice, and you should consult your legal advisor for any questions regarding regulatory compliance.
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
We strongly recommended to update to generally available versions listed as foll
| August 2021 | Fixed issue allowing Azure Monitor Metrics as the only destination | 1.1.2.0 | 1.10.9.0<sup>Hotfix</sup> | | September 2021 | <ul><li>Fixed issue causing data loss on restarting the agent</li><li>Fixed issue for Arc Windows servers</li></ul> | 1.1.3.2<sup>Hotfix</sup> | 1.12.2.0 <sup>1</sup> | | December 2021 | <ul><li>Fixed issues impacting Linux Arc-enabled servers</li><li>'Heartbeat' table > 'Category' column reports "Azure Monitor Agent" in Log Analytics for Windows</li></ul> | 1.1.4.0 | 1.14.7.0<sup>2</sup> |
-| January 2021 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
+| January 2022 | <ul><li>Syslog RFC compliance for Linux</li><li>Fixed issue for Linux perf counters not flowing on restart</li><li>Fixed installation failure on Windows Server 2008 R2 SP1</li></ul> | 1.1.5.1<sup>Hotfix</sup> | 1.15.2.0<sup>Hotfix</sup> |
<sup>Hotfix</sup> Do not use AMA Linux versions v1.10.7, v1.15.1 and AMA Windows v1.1.3.1, v1.1.5.0. Please use hotfixed versions listed above. <sup>1</sup> Known issue: No data collected from Linux Arc-enabled servers
The following prerequisites must be met prior to installing the Azure Monitor ag
- [Managed system identity](../../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md) must be enabled on Azure virtual machines. This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-rule-and-association-in-azure-portal). - The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. - The virtual machine must have access to the following HTTPS endpoints:
- - *.ods.opinsights.azure.com
- - *.ingest.monitor.azure.com
- - *.control.monitor.azure.com
+ - global.handler.control.monitor.azure.com
+ - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com)
+ - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
+ (If using private links on the agent, you must also add the [dce endpoints](../essentials/data-collection-endpoint-overview.md#components-of-a-data-collection-endpoint))
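The region- and workspace-specific endpoint names above follow a simple pattern. As a hedged sketch (the `westus` region and the workspace ID below are placeholder examples, not real resources), the list can be composed like this:

```javascript
// Sketch: compose the HTTPS endpoints the Azure Monitor agent must reach,
// from a VM region name and a Log Analytics workspace ID.
// The input values used below are illustrative placeholders.
function requiredAgentEndpoints(regionName, workspaceId) {
  return [
    "global.handler.control.monitor.azure.com",
    `${regionName}.handler.control.monitor.azure.com`,
    `${workspaceId}.ods.opinsights.azure.com`,
  ];
}

console.log(requiredAgentEndpoints("westus", "12345a01-b1cd-1234-e1f2-1234567g8h99"));
```

If the agent uses private links, the data collection endpoint (DCE) hostnames would also need to be appended to this list.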
+ > [!NOTE] > This article only pertains to agent installation or management. After you install the agent, you must review the next article to [configure data collection rules and associate them with the machines](./data-collection-rule-azure-monitor-agent.md) with agents installed.
azure-monitor Api Custom Events Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-custom-events-metrics.md
dependencies
Normally, the SDK sends data at fixed intervals (typically 30 secs) or whenever the buffer is full (typically 500 items). However, in some cases, you might want to flush the buffer--for example, if you are using the SDK in an application that shuts down.
-*C#*
+*.NET*
```csharp telemetry.Flush();
telemetry.flush();
The function is asynchronous for the [server telemetry channel](https://www.nuget.org/packages/Microsoft.ApplicationInsights.WindowsServer.TelemetryChannel/).
-Ideally, flush() method should be used in the shutdown activity of the Application.
+We recommend calling the flush() or flushAsync() method in the shutdown activity of the application when using the .NET or JS SDK.
+
+For example:
+
+*JS*
+
+```javascript
+// Immediately send all queued telemetry. By default, it is sent async.
+flush(async?: boolean = true)
+```
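To make the buffering behavior concrete, here is a minimal self-contained sketch (not the SDK itself — `TelemetryBuffer` is an illustrative stand-in) of a buffer that sends automatically when full and can be flushed explicitly at shutdown:

```javascript
// Minimal illustrative sketch of the buffering behavior described above:
// items are sent when the buffer fills, or on an explicit flush().
class TelemetryBuffer {
  constructor(maxItems, send) {
    this.maxItems = maxItems; // the real SDK typically buffers ~500 items
    this.send = send;
    this.items = [];
  }
  track(item) {
    this.items.push(item);
    if (this.items.length >= this.maxItems) this.flush();
  }
  flush() {
    if (this.items.length === 0) return;
    this.send(this.items);
    this.items = [];
  }
}

const sent = [];
const buffer = new TelemetryBuffer(2, (batch) => sent.push(...batch));
buffer.track("event1"); // buffered
buffer.track("event2"); // buffer full, sent automatically
buffer.track("event3"); // buffered
buffer.flush();         // explicit flush during shutdown
console.log(sent);      // ["event1", "event2", "event3"]
```

The explicit `flush()` at the end mirrors why flushing in your shutdown path matters: without it, the last buffered items would be lost when the process exits.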
## Authenticated users
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/change-analysis-visualizations.md
The UI supports selecting multiple subscriptions to view resource changes. Use t
:::image type="content" source="./media/change-analysis/multiple-subscriptions-support.png" alt-text="Screenshot of subscription filter that supports selecting multiple subscriptions":::
-## Application Change Analysis in the Diagnose and solve problems tool
+## Diagnose and solve problems tool
Application Change Analysis is: - A standalone detector in the Web App **Diagnose and solve problems** tool. - Aggregated in **Application Crashes** and **Web App Down detectors**.
-From your app service's overview page in Azure portal, select **Diagnose and solve problems** the left menu. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered. Enable web app in-guest change tracking with the following instructions:
+From your resource's overview page in the Azure portal, select **Diagnose and solve problems** from the left menu. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered.
+
+### Diagnose and solve problems tool for Web App
+
+> [!NOTE]
+> You may not immediately see web app in-guest file changes and configuration changes. Restart your web app and you should be able to view changes within 30 minutes. If not, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
1. Select **Availability and Performance**.
By default, the graph displays changes from within the past 24 hours help with i
:::image type="content" source="./media/change-analysis/change-view.png" alt-text="Screenshot of the change diff view":::
-## Diagnose and Solve Problems tool
-Change Analysis displays as an insight card in a virtual machine's **Diagnose and solve problems** tool. The insight card displays the number of changes or issues a resource experiences within the past 72 hours.
-
-Under **Common problems**, select **View change details** to view the filtered view from Change Analysis standalone UI.
-
+### Diagnose and solve problems tool for Virtual Machines
-## Virtual Machine Diagnose and Solve Problems
+Change Analysis displays as an insight card in your virtual machine's **Diagnose and solve problems** tool. The insight card displays the number of changes or issues a resource experiences within the past 72 hours.
1. Within your virtual machine, select **Diagnose and solve problems** from the left menu. 1. Go to **Troubleshooting tools**. 1. Scroll to the end of the troubleshooting options and select **Analyze recent changes** to view changes on the virtual machine.
+ :::image type="content" source="./media/change-analysis/vm-dnsp-troubleshootingtools.png" alt-text="Screenshot of the VM Diagnose and Solve Problems":::
+
+ :::image type="content" source="./media/change-analysis/analyze-recent-changes.png" alt-text="Change analyzer in troubleshooting tools":::
+
+### Diagnose and solve problems tool for Azure SQL Database and other resources
+
+You can view Change Analysis data for [multiple Azure resources](./change-analysis.md#supported-resource-types), but we highlight Azure SQL Database below.
+
+1. Within your resource, select **Diagnose and solve problems** from the left menu.
+1. Under **Common problems**, select **View change details** to view the filtered view from Change Analysis standalone UI.
+ :::image type="content" source="./media/change-analysis/diagnose-tool-other-resources.png" alt-text="Screenshot of viewing common problems in Diagnose and Solve Problems tool.":::
## Activity Log change history
Use the [View change history](../essentials/activity-log.md#view-change-history)
1. Once registered, you can view changes from **Azure Resource Graph** immediately from the past 14 days. - Changes from other sources will be available ~4 hours after the subscription is onboarded.
+ :::image type="content" source="./media/change-analysis/activity-log-change-history.png" alt-text="Activity Log change history integration":::
## VM Insights integration
azure-monitor Change Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/change-analysis.md
Application Change Analysis service supports resource property level changes in
- Virtual Machine - Virtual machine scale set - App Service-- Azure Kubernetes service
+- Azure Kubernetes Service (AKS)
- Azure Function - Networking resources: - Network Security Group
Unlike Azure Resource Graph, Change Analysis securely queries and computes IP Co
### Changes in web app deployment and configuration (in-guest changes)
-Every 4 hours, Change Analysis captures the deployment and configuration state of an application. For example, it can detect changes in the application environment variables. The tool computes the differences and presents the changes.
+Every 30 minutes, Change Analysis captures the deployment and configuration state of an application. For example, it can detect changes in the application environment variables. The tool computes the differences and presents the changes.
Unlike Azure Resource Manager changes, code deployment change information might not be available immediately in the Change Analysis tool. To view the latest changes in Change Analysis, select **Refresh**. :::image type="content" source="./media/change-analysis/scan-changes.png" alt-text="Screenshot of the Scan changes now button":::
-Currently all text-based files under site root **wwwroot** with the following extensions are supported:
+If you don't see changes within 30 minutes, refer to [our troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+
+Currently, all text-based files under site root **wwwroot** with the following extensions are supported:
- *.json - *.xml - *.ini
You'll need to register the `Microsoft.ChangeAnalysis` resource provider with an
- Enter the Web App **Diagnose and Solve Problems** tool, or - Bring up the Change Analysis standalone tab.
-For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#application-change-analysis-in-the-diagnose-and-solve-problems-tool) section.
+For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool-for-web-app) section.
+
+If you don't see changes within 30 minutes, refer to [the troubleshooting guide](./change-analysis-troubleshoot.md#cannot-see-in-guest-changes-for-newly-enabled-web-app).
+ ## Cost Application Change Analysis is a free service. Once enabled, the Change Analysis **Diagnose and solve problems** tool does not:
azure-monitor Mobile Center Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/mobile-center-quickstart.md
To onboard your app, follow the App Center quickstart for each platform your app
## Track events in your app
-After your app is onboarded to App Center, it needs to be modified to send custom event telemetry using the App Center SDK. Custom events are the only type of App Center telemetry that is exported to Application Insights.
+After your app is onboarded to App Center, it needs to be modified to send custom event telemetry using the App Center SDK.
To send custom events from iOS apps, use the `trackEvent` or `trackEvent:withProperties` methods in the App Center SDK. [Learn more about tracking events from iOS apps.](/mobile-center/sdk/analytics/ios)
To delete the Application Insights resource:
## Next steps > [!div class="nextstepaction"]
-> [Understand how customers are using your app](../app/usage-overview.md)
+> [Understand how customers are using your app](../app/usage-overview.md)
azure-monitor Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md
To receive, store, and explore your monitoring data, include the SDK in your code, and then set up a corresponding Application Insights resource in Azure. The SDK sends data to that resource for further analysis and exploration.
-The Node.js SDK can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics. Beginning in version 0.20, the SDK also can monitor some common [third-party packages](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers#currently-supported-modules), like MongoDB, MySQL, and Redis. All events related to an incoming HTTP request are correlated for faster troubleshooting.
+The Node.js client library can automatically monitor incoming and outgoing HTTP requests, exceptions, and some system metrics. Beginning in version 0.20, the client library also can monitor some common [third-party packages](https://github.com/microsoft/node-diagnostic-channel/tree/master/src/diagnostic-channel-publishers#currently-supported-modules), like MongoDB, MySQL, and Redis. All events related to an incoming HTTP request are correlated for faster troubleshooting.
You can use the TelemetryClient API to manually instrument and monitor additional aspects of your app and system. We describe the TelemetryClient API in more detail later in this article.
Before you begin, make sure that you have an Azure subscription, or [get a new o
1. Sign in to the [Azure portal][portal]. 2. [Create an Application Insights resource](create-new-resource.md)
-### <a name="sdk"></a> Set up the Node.js SDK
+### <a name="sdk"></a> Set up the Node.js client library
Include the SDK in your app, so it can gather data.
Include the SDK in your app, so it can gather data.
![Copy instrumentation key](./media/nodejs/instrumentation-key-001.png)
-2. Add the Node.js SDK library to your app's dependencies via package.json. From the root folder of your app, run:
+2. Add the Node.js client library to your app's dependencies via package.json. From the root folder of your app, run:
```bash npm install applicationinsights --save
appInsights
For a full description of the TelemetryClient API, see [Application Insights API for custom events and metrics](./api-custom-events-metrics.md).
-You can track any request, event, metric, or exception by using the Application Insights Node.js SDK. The following code example demonstrates some of the APIs that you can use:
+You can track any request, event, metric, or exception by using the Application Insights client library for Node.js. The following code example demonstrates some of the APIs that you can use:
```javascript let appInsights = require("applicationinsights");
azure-monitor Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md
# System performance counters in Application Insights
-Windows provides a wide variety of [performance counters](/windows/desktop/perfctrs/about-performance-counters) such as CPU occupancy, memory, disk, and network usage. You can also define your own performance counters. Performance counters collection is supported as long as your application is running under IIS on an on-premises host, or virtual machine to which you have administrative access. Though applications running as Azure Web Apps don't have direct access to performance counters, a subset of available counters are collected by Application Insights.
+Windows provides a wide variety of [performance counters](/windows/desktop/perfctrs/about-performance-counters) such as processor, memory, and disk usage statistics. You can also define your own performance counters. Performance counters collection is supported as long as your application is running under IIS on an on-premises host, or virtual machine to which you have administrative access. Though applications running as Azure Web Apps don't have direct access to performance counters, a subset of available counters are collected by Application Insights.
## View counters
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
With specific business events, you can chart your users' progress through your s
Events can be logged from the client side of the app: ```JavaScript
- appInsights.trackEvent("ExpandDetailTab", {DetailTab: tabName});
+ appInsights.trackEvent({name: "incrementCount"});
``` Or from the server side:
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-cost-storage.md
The **Data Retention** page allows retention settings of 30, 31, 60, 90, 120, 18
Workspaces with 30-day retention might actually retain data for 31 days. If it's imperative that data be kept for only 30 days, use Azure Resource Manager to set the retention to 30 days with the `immediatePurgeDataOn30Days` parameter.
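As an illustration of the Resource Manager approach, the workspace update payload might look like the following sketch. The property names follow the `Microsoft.OperationalInsights/workspaces` schema, but treat the exact shape as an assumption and verify it against the current API version:

```javascript
// Hedged sketch: a workspace properties payload that pins retention to
// exactly 30 days. Verify property names against the current
// Microsoft.OperationalInsights/workspaces API version before use.
const workspaceUpdate = {
  properties: {
    retentionInDays: 30,                             // workspace retention
    features: { immediatePurgeDataOn30Days: true },  // purge on day 30, no 31st-day grace
  },
};

console.log(JSON.stringify(workspaceUpdate, null, 2));
```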
-By default, two data types - `Usage` and `AzureActivity` - are retained for a minimum of 90 days at no charge. If the workspace retention is increased to more than 90 days, the retention of these data types is also increased. These data types are also free from data ingestion charges.
+By default, two data types - `Usage` and `AzureActivity` - are retained for a minimum of 90 days at no charge. When you increase the workspace retention to more than 90 days, you also increase the retention of these data types, and you'll be charged for retaining this data beyond the 90-day period. These data types are also free from data ingestion charges.
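The 90-day free window for these data types can be expressed as simple arithmetic. This is an illustrative sketch only, not Azure's billing formula:

```javascript
// Illustrative only: how many days of retention for the Usage/AzureActivity
// data types fall outside the free 90-day window (and are chargeable).
function billableRetentionDays(workspaceRetentionDays) {
  const FREE_DAYS = 90;
  return Math.max(0, workspaceRetentionDays - FREE_DAYS);
}

console.log(billableRetentionDays(120)); // 30 chargeable days
console.log(billableRetentionDays(90));  // 0 -- within the free period
```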
Data types from workspace-based Application Insights resources (`AppAvailabilityResults`, `AppBrowserTimings`, `AppDependencies`, `AppExceptions`, `AppEvents`, `AppMetrics`, `AppPageViews`, `AppPerformanceCounters`, `AppRequests`, `AppSystemEvents`, and `AppTraces`) are also retained for 90 days at no charge by default. Their retention can be adjusted using the retention by data type functionality.
To facilitate this assessment, the following query can be used to make a recomme
Here is the pricing tier recommendation query: ```kusto
-// Set these parameters before running query
-// Pricing details available at https://azure.microsoft.com/pricing/details/monitor/
+// Set these parameters before running query.
+// For Pay-As-You-Go (per-GB) pricing details, see https://azure.microsoft.com/pricing/details/monitor/.
+// You can see your per-node costs in your Azure usage and charge data. For more information, see https://docs.microsoft.com/en-us/azure/cost-management-billing/understand/download-azure-daily-usage.
let daysToEvaluate = 7; // Enter number of previous days to analyze (reduce if the query is taking too long) let workspaceHasSecurityCenter = false; // Specify if the workspace has Defender for Cloud (formerly known as Azure Security Center) let PerNodePrice = 15.; // Enter your monthly price per monitored nodes
azure-netapp-files Azacsnap Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-get-started.md
na Previously updated : 04/21/2021 Last updated : 03/03/2022
For more information about using GPG, see [The GNU Privacy Handbook](https://www
## Supported scenarios
-The snapshot tools can be used in the following scenarios.
--- Single SID-- Multiple SID-- HSR-- Scale-out-- MDC (Only single tenant supported)-- Single Container-- SUSE Operating System-- RHEL Operating System-- SKU TYPE I-- SKU TYPE II-
-See [Supported scenarios for HANA Large Instances](../virtual-machines/workloads/sap/hana-supported-scenario.md)
+The snapshot tools can be used in the scenarios described in [Supported scenarios for HANA Large Instances](../virtual-machines/workloads/sap/hana-supported-scenario.md) and
+[SAP HANA with Azure NetApp Files](../virtual-machines/workloads/sap/hana-vm-operations-netapp.md).
## Snapshot Support Matrix from SAP
azure-portal Azure Portal Video Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-video-series.md
Title: Azure portal how-to video series description: Find video demos for how to work with Azure services in the portal. View and link directly to the latest how-to videos. keywords: Previously updated : 03/16/2021 Last updated : 03/03/2022 # Azure portal how-to video series
-The Azure portal how-to video series showcases how to work with Azure services in the Azure portal. Each week the Azure portal team adds to the video playlist. These interactive demos can help you be more efficient and productive.
+The [Azure portal how-to video series](https://www.youtube.com/playlist?list=PLLasX02E8BPBKgXP4oflOL29TtqTzwhxR) showcases how to work with Azure services in the Azure portal. Each week the Azure portal team adds to the video playlist. These interactive demos can help you be more efficient and productive.
## Featured video
-In this featured video, we show you how to build tabs and alerts in Azure workbooks.
+In this featured video, we show you how to move your resources in Azure between resource groups and locations.
-> [!VIDEO https://www.youtube.com/embed/3XY3lYgrRvA]
+> [!VIDEO https://www.youtube.com/embed/8HVAP4giLdc]
-[How to build tabs and alerts in Azure workbooks](https://www.youtube.com/watch?v=3XY3lYgrRvA)
+[How to move Azure resources](https://www.youtube.com/watch?v=8HVAP4giLdc)
-Catch up on these recent videos you may have missed:
+Catch up on these videos you may have missed:
| [How to easily manage your virtual machine](https://www.youtube.com/watch?v=vQClJHt2ulQ) | [How to use pills to filter in the Azure portal](https://www.youtube.com/watch?v=XyKh_3NxUlM) | [How to get a visualization view of your resources](https://www.youtube.com/watch?v=wudqkkJd5E4) | | | | |
Explore the [Azure portal how-to series](https://www.youtube.com/playlist?list=P
## Next steps Explore hundreds of videos for Azure services in the [video library](https://azure.microsoft.com/resources/videos/index/?tag=microsoft-azure-portal).
azure-resource-manager Resource Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/resource-dependencies.md
Title: Set resource dependencies in Bicep description: Describes how to specify the order resources are deployed. Previously updated : 02/04/2022 Last updated : 03/02/2022 # Resource dependencies in Bicep
resource otherZone 'Microsoft.Network/dnszones@2018-05-01' = {
} ```
-While you may be inclined to use `dependsOn` to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, `dependsOn` isn't the right approach. You can't query which resources were defined in the `dependsOn` element after deployment. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
+While you may be inclined to use `dependsOn` to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, `dependsOn` isn't the right approach. After deployment, the resource doesn't retain deployment dependencies in its properties, so there are no commands or operations that let you see dependencies. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
Even though explicit dependencies are sometimes required, the need for them is rare. In most cases, you can use a symbolic name to imply the dependency between resources. If you find yourself setting explicit dependencies, you should consider if there's a way to remove it.
azure-resource-manager Resource Dependency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/resource-dependency.md
Title: Set deployment order for resources description: Describes how to set one Azure resource as dependent on another resource during deployment. The dependencies ensure resources are deployed in the correct order. Previously updated : 12/21/2020 Last updated : 03/02/2022 # Define the order for deploying resources in ARM templates
The following example shows a network interface that depends on a virtual networ
} ```
-While you may be inclined to use `dependsOn` to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, `dependsOn` isn't the right approach. You can't query which resources were defined in the `dependsOn` element after deployment. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
+While you may be inclined to use `dependsOn` to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, `dependsOn` isn't the right approach. After deployment, the resource doesn't retain deployment dependencies in its properties, so there are no commands or operations that let you see dependencies. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
## Child resources
azure-sql Active Geo Replication Configure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/active-geo-replication-configure-portal.md
Last updated 08/20/2021
This article shows you how to configure [active geo-replication for Azure SQL Database](active-geo-replication-overview.md#active-geo-replication-terminology-and-capabilities) using the [Azure portal](https://portal.azure.com) or Azure CLI and to initiate failover.
-For best practices using auto-failover groups, see [Best practices for Azure SQL Database](auto-failover-group-overview.md#best-practices-for-sql-database) and [Best practices for Azure SQL Managed Instance](auto-failover-group-overview.md#best-practices-for-sql-managed-instance).
+For best practices using auto-failover groups, see [Auto-failover groups with Azure SQL Database](auto-failover-group-sql-db.md) and [Auto-failover groups with Azure SQL Managed Instance](../managed-instance/auto-failover-group-sql-mi.md).
azure-sql Auto Failover Group Configure Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-configure-sql-db.md
+
+ Title: Configure an auto-failover group
+
+description: Learn how to configure an auto-failover group for a single or pooled database in Azure SQL Database using the Azure portal and PowerShell.
+++++
+ms.devlang:
+++ Last updated : 03/01/2022
+zone_pivot_groups: azure-sql-deployment-option-single-elastic
+
+# Configure an auto-failover group for Azure SQL Database
+
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](auto-failover-group-configure-sql-db.md)
+> * [Azure SQL Managed Instance](../managed-instance/auto-failover-group-configure-sql-mi.md)
+
+This article teaches you how to configure an [auto-failover group](auto-failover-group-sql-db.md) for single and pooled databases in Azure SQL Database by using the Azure portal and Azure PowerShell. For an end-to-end experience, review the [Auto-failover group tutorial](failover-group-add-single-database-tutorial.md).
+
+> [!NOTE]
+> This article covers auto-failover groups for Azure SQL Database. For Azure SQL Managed Instance, see [Configure auto-failover groups in Azure SQL Managed Instance](../managed-instance/auto-failover-group-configure-sql-mi.md).
++++
+## Prerequisites
+
+Consider the following prerequisites for creating your failover group for a single database:
+
+- The server login and firewall settings for the secondary server must match those of your primary server.
+
+## Create failover group
+
+# [Portal](#tab/azure-portal)
+
+Create your failover group and add your single database to it using the Azure portal.
+
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select the database you want to add to the failover group.
+1. Select the name of the server under **Server name** to open the settings for the server.
+
+ ![Open server for single db](./media/auto-failover-group-configure-sql-db/open-sql-db-server.png)
+
+1. Select **Failover groups** under the **Settings** pane, and then select **Add group** to create a new failover group.
+
+ ![Add new failover group](./media/auto-failover-group-configure-sql-db/sqldb-add-new-failover-group.png)
+
+1. On the **Failover Group** page, enter or select the required values, and then select **Create**.
+
+ - **Databases within the group**: Choose the database you want to add to your failover group. Adding the database to the failover group will automatically start the geo-replication process.
+
+ ![Add SQL Database to failover group](./media/auto-failover-group-configure-sql-db/add-sqldb-to-failover-group.png)
+
+# [PowerShell](#tab/azure-powershell)
+
+Create your failover group and add your database to it using PowerShell.
+
+ ```powershell-interactive
+ $subscriptionId = "<SubscriptionID>"
+ $resourceGroupName = "<Resource-Group-Name>"
+ $location = "<Region>"
+ $adminLogin = "<Admin-Login>"
+ $password = "<Complex-Password>"
+ $serverName = "<Primary-Server-Name>"
+ $databaseName = "<Database-Name>"
+ $drLocation = "<DR-Region>"
+ $drServerName = "<Secondary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+
+ # Create a secondary server in the failover region
+ Write-host "Creating a secondary server in the failover region..."
+ $drServer = New-AzSqlServer -ResourceGroupName $resourceGroupName `
+ -ServerName $drServerName `
+ -Location $drLocation `
+ -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential `
+ -ArgumentList $adminlogin, $(ConvertTo-SecureString -String $password -AsPlainText -Force))
+ $drServer
+
+ # Create a failover group between the servers
+ Write-host "Creating a failover group between the primary and secondary server..."
+ $failovergroup = New-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -PartnerServerName $drServerName `
+ -FailoverGroupName $failoverGroupName `
+ -FailoverPolicy Automatic `
+ -GracePeriodWithDataLossHours 2
+ $failovergroup
+
+ # Add the database to the failover group
+ Write-host "Adding the database to the failover group..."
+ Get-AzSqlDatabase `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -DatabaseName $databaseName | `
+ Add-AzSqlDatabaseToFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -FailoverGroupName $failoverGroupName
+ Write-host "Successfully added the database to the failover group..."
+ ```
+++
+## Test failover
+
+Test failover of your failover group using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Test failover of your failover group using the Azure portal.
+
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select the database you want to add to the failover group.
+
+ ![Open server for single db](./media/auto-failover-group-configure-sql-db/open-sql-db-server.png)
+
+1. Select **Failover groups** under the **Settings** pane and then choose the failover group you just created.
+
+ ![Select the failover group from the portal](./media/auto-failover-group-configure-sql-db/select-failover-group.png)
+
+1. Review which server is primary and which server is secondary.
+1. Select **Failover** from the task pane to fail over your failover group containing your database.
+1. Select **Yes** on the warning that notifies you that TDS sessions will be disconnected.
+
+ ![Fail over your failover group containing your database](./media/auto-failover-group-configure-sql-db/failover-sql-db.png)
+
+1. Review which server is now primary and which server is secondary. If failover succeeded, the two servers should have swapped roles.
+1. Select **Failover** again to fail the servers back to their original roles.
+
+# [PowerShell](#tab/azure-powershell)
+
+Test failover of your failover group using PowerShell.
+
+Check the role of the secondary replica:
+
+ ```powershell-interactive
+ # Set variables
+ $resourceGroupName = "<Resource-Group-Name>"
+ $serverName = "<Primary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+ $drServerName = "<Secondary-Server-Name>"
+
+ # Check role of secondary replica
+ Write-host "Confirming the secondary replica is secondary...."
+ (Get-AzSqlDatabaseFailoverGroup `
+ -FailoverGroupName $failoverGroupName `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $drServerName).ReplicationRole
+ ```
+
+Fail over to the secondary server:
+
+ ```powershell-interactive
+ # Set variables
+ $resourceGroupName = "<Resource-Group-Name>"
+ $serverName = "<Primary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+ $drServerName = "<Secondary-Server-Name>"
+
+ # Failover to secondary server
+ Write-host "Failing over failover group to the secondary..."
+ Switch-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $drServerName `
+ -FailoverGroupName $failoverGroupName
+ Write-host "Failed over failover group successfully to" $drServerName
+ ```
+
+Revert failover group back to the primary server:
+
+ ```powershell-interactive
+ # Set variables
+ $resourceGroupName = "<Resource-Group-Name>"
+ $serverName = "<Primary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+
+ # Revert failover to primary server
+ Write-host "Failing over failover group to the primary...."
+ Switch-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -FailoverGroupName $failoverGroupName
+ Write-host "Failed over failover group successfully back to" $serverName
+ ```
+++
+> [!IMPORTANT]
+> If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
++++
+## Prerequisites
+
+Consider the following prerequisites for creating your failover group for a pooled database:
+
+- The server login and firewall settings for the secondary server must match that of your primary server.
+
+## Create failover group
+
+Create the failover group for your elastic pool using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Create your failover group and add your elastic pool to it using the Azure portal.
+
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select the elastic pool you want to add to the failover group.
+1. On the **Overview** pane, select the name of the server under **Server name** to open the settings for the server.
+
+ ![Open server for elastic pool](./media/auto-failover-group-configure-sql-db/server-for-elastic-pool.png)
+
+1. Select **Failover groups** under the **Settings** pane, and then select **Add group** to create a new failover group.
+
+ ![Add new failover group](./media/auto-failover-group-configure-sql-db/sqldb-add-new-failover-group.png)
+
+1. On the **Failover Group** page, enter or select the required values, and then select **Create**. Either create a new secondary server, or select an existing secondary server.
+
+1. Select **Databases within the group** then choose the elastic pool you want to add to the failover group. If an elastic pool does not already exist on the secondary server, a warning appears prompting you to create an elastic pool on the secondary server. Select the warning, and then select **OK** to create the elastic pool on the secondary server.
+
+ ![Add elastic pool to failover group](./media/auto-failover-group-configure-sql-db/add-elastic-pool-to-failover-group.png)
+
+1. Select **Select** to apply your elastic pool settings to the failover group, and then select **Create** to create your failover group. Adding the elastic pool to the failover group will automatically start the geo-replication process.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create your failover group and add your elastic pool to it using PowerShell.
+
+ ```powershell-interactive
+ $subscriptionId = "<SubscriptionID>"
+ $resourceGroupName = "<Resource-Group-Name>"
+ $location = "<Region>"
+ $adminLogin = "<Admin-Login>"
+ $password = "<Complex-Password>"
+ $serverName = "<Primary-Server-Name>"
+ $databaseName = "<Database-Name>"
+ $poolName = "myElasticPool"
+ $drLocation = "<DR-Region>"
+ $drServerName = "<Secondary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+
+ # Create a failover group between the servers
+ Write-host "Creating failover group..."
+ New-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -PartnerServerName $drServerName `
+ -FailoverGroupName $failoverGroupName `
+ -FailoverPolicy Automatic `
+ -GracePeriodWithDataLossHours 2
+ Write-host "Failover group created successfully."
+
+ # Add elastic pool to the failover group
+ Write-host "Enumerating databases in elastic pool...."
+ $failoverGroup = Get-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -FailoverGroupName $failoverGroupName
+ $databases = Get-AzSqlElasticPoolDatabase `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $serverName `
+ -ElasticPoolName $poolName
+ Write-host "Adding databases to failover group..."
+ $failoverGroup = $failoverGroup | Add-AzSqlDatabaseToFailoverGroup `
+ -Database $databases
+ Write-host "Databases added to failover group successfully."
+ ```
+++
+## Test failover
+
+Test failover of your elastic pool using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Fail your failover group over to the secondary server, and then fail back using the Azure portal.
+
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select the elastic pool you want to add to the failover group.
+1. On the **Overview** pane, select the name of the server under **Server name** to open the settings for the server.
+
+ ![Open server for elastic pool](./media/auto-failover-group-configure-sql-db/server-for-elastic-pool.png)
+1. Select **Failover groups** under the **Settings** pane and then choose the failover group you created earlier.
+
+ ![Select the failover group from the portal](./media/auto-failover-group-configure-sql-db/select-failover-group.png)
+
+1. Review which server is primary, and which server is secondary.
+1. Select **Failover** from the task pane to fail over your failover group containing your elastic pool.
+1. Select **Yes** on the warning that notifies you that TDS sessions will be disconnected.
+
+ ![Fail over your failover group containing your database](./media/auto-failover-group-configure-sql-db/failover-sql-db.png)
+
+1. Review which server is now primary and which server is secondary. If failover succeeded, the two servers should have swapped roles.
+1. Select **Failover** again to fail the failover group back to the original settings.
+
+# [PowerShell](#tab/azure-powershell)
+
+Test failover of your failover group using PowerShell.
+
+Check the role of the secondary replica:
+
+ ```powershell-interactive
+ # Set variables
+ $resourceGroupName = "<Resource-Group-Name>"
+ $serverName = "<Primary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+ $drServerName = "<Secondary-Server-Name>"
+
+ # Check role of secondary replica
+ Write-host "Confirming the secondary replica is secondary...."
+ (Get-AzSqlDatabaseFailoverGroup `
+ -FailoverGroupName $failoverGroupName `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $drServerName).ReplicationRole
+ ```
+
+Fail over to the secondary server:
+
+ ```powershell-interactive
+ # Set variables
+ $resourceGroupName = "<Resource-Group-Name>"
+ $serverName = "<Primary-Server-Name>"
+ $failoverGroupName = "<Failover-Group-Name>"
+ $drServerName = "<Secondary-Server-Name>"
+
+ # Failover to secondary server
+ Write-host "Failing over failover group to the secondary..."
+ Switch-AzSqlDatabaseFailoverGroup `
+ -ResourceGroupName $resourceGroupName `
+ -ServerName $drServerName `
+ -FailoverGroupName $failoverGroupName
+ Write-host "Failed over failover group successfully to" $drServerName
+ ```
+++
+> [!IMPORTANT]
+> If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
++
+## Use Private Link
+
+Using a private link allows you to associate a logical server with a specific private IP address within a virtual network and subnet.
+
+To use a private link with your failover group, do the following:
+
+1. Ensure your primary and secondary servers are in a [paired region](../../availability-zones/cross-region-replication-azure.md).
+1. Create the virtual network and subnet in each region to host private endpoints for the primary and secondary servers such that they have non-overlapping IP address spaces. For example, a primary virtual network address range of 10.0.0.0/16 and a secondary virtual network address range of 10.0.0.1/16 overlap. For more information about virtual network address ranges, see the blog [designing Azure virtual networks](https://devblogs.microsoft.com/premier-developer/understanding-cidr-notation-when-designing-azure-virtual-networks-and-subnets/).
+1. Create a [private endpoint and Azure Private DNS zone for the primary server](../../private-link/create-private-endpoint-portal.md#create-a-private-endpoint).
+1. Create a private endpoint for the secondary server as well, but this time choose to reuse the same Private DNS zone that was created for the primary server.
+1. Once the private link is established, you can create the failover group following the steps outlined previously in this article.
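As a quick sanity check on the address ranges mentioned in step 2, Python's standard `ipaddress` module can test whether two ranges overlap. This is only an illustrative sketch: the first two ranges are the example values from the article, and `10.1.0.0/16` is one hypothetical non-overlapping alternative, not a recommendation.

```python
import ipaddress

# Primary virtual network range from the example in step 2
primary = ipaddress.ip_network("10.0.0.0/16")

# strict=False normalizes 10.0.0.1/16 to its network address, 10.0.0.0/16
secondary = ipaddress.ip_network("10.0.0.1/16", strict=False)
print(primary.overlaps(secondary))  # these two ranges conflict

# A hypothetical non-overlapping range for the secondary virtual network
secondary_ok = ipaddress.ip_network("10.1.0.0/16")
print(primary.overlaps(secondary_ok))
```

Running a check like this before creating the virtual networks avoids having to re-address a subnet after the private endpoints are deployed.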
++
+## Locate listener endpoint
+
+Once your failover group is configured, update the connection string for your application to the listener endpoint. This will keep your application connected to the failover group listener, rather than the primary database, elastic pool, or instance database. That way, you don't have to manually update the connection string every time your database entity fails over, and traffic is routed to whichever entity is currently primary.
+
+The listener endpoint is in the form `fog-name.database.windows.net` and is visible in the Azure portal when viewing the failover group:
+
+![Failover group connection string](./media/auto-failover-group-configure-sql-db/find-failover-group-connection-string.png)
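As an illustration of pointing an application at the listener rather than a specific server, a connection string can be built from the failover group name. This is a sketch only: the helper function, the `fog-name` failover group, and the `mydb` database are placeholders, not values from this article.

```python
# Sketch: target the failover group listener, which always resolves to the
# current primary server, instead of hard-coding a server name.
def listener_connection_string(failover_group: str, database: str) -> str:
    listener = f"{failover_group}.database.windows.net"
    return (
        f"Server=tcp:{listener},1433;"
        f"Database={database};"
        "Encrypt=True;Connection Timeout=30;"
    )

print(listener_connection_string("fog-name", "mydb"))
```

Because the listener DNS name follows the failover group, this connection string stays valid across failovers without any application change.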
+
+## <a name="changing-secondary-region-of-the-failover-group"></a> Change the secondary region
+
+To illustrate the change sequence, we will assume that server A is the primary server, server B is the existing secondary server, and server C is the new secondary in the third region. To make the transition, follow these steps:
+
+1. Create an additional secondary of each database on server A on server C using [active geo-replication](active-geo-replication-overview.md). Each database on server A will then have two secondaries: one on server B and one on server C. This guarantees that the primary databases remain protected during the transition.
+1. Delete the failover group. At this point, login attempts that use the failover group endpoints fail.
+1. Re-create the failover group with the same name between servers A and C.
+1. Add all primary databases on server A to the new failover group. At this point, login attempts stop failing.
+1. Delete server B. All databases on B will be deleted automatically.
+
+## <a name="changing-primary-region-of-the-failover-group"></a> Change the primary region
+
+To illustrate the change sequence, we will assume server A is the primary server, server B is the existing secondary server, and server C is the new primary in the third region. To make the transition, follow these steps:
+
+1. Perform a planned geo-failover to switch the primary server to B. Server A becomes the new secondary server. The failover may result in several minutes of downtime; the actual time depends on the size of the failover group.
+1. Create an additional secondary of each database on server B on server C using [active geo-replication](active-geo-replication-overview.md). Each database on server B will then have two secondaries: one on server A and one on server C. This guarantees that the primary databases remain protected during the transition.
+1. Delete the failover group. At this point, login attempts that use the failover group endpoints fail.
+1. Re-create the failover group with the same name between servers B and C.
+1. Add all primary databases on B to the new failover group. At this point, login attempts stop failing.
+1. Perform a planned geo-failover of the failover group to switch B and C. Server C becomes the primary and B the secondary. All secondary databases on server A are automatically linked to the primaries on C. As in step 1, the failover may result in several minutes of downtime.
+1. Delete server A. All databases on A will be deleted automatically.
+
+> [!IMPORTANT]
+> When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a non-zero probability of somebody else creating a failover group or a server DNS alias with the same name. Because failover group names and DNS aliases must be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover group names.
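One lightweight way to avoid a generic failover group name is to combine an application-specific prefix with a random suffix. This naming scheme is an assumption for illustration, not a Microsoft convention; `contoso-orders` is a placeholder application name.

```python
import uuid

# Hypothetical naming helper: an app-specific prefix plus a random suffix
# makes a collision with someone else's failover group name very unlikely.
def unique_failover_group_name(app: str) -> str:
    return f"fog-{app}-{uuid.uuid4().hex[:8]}"

print(unique_failover_group_name("contoso-orders"))
```

The `uuid4().hex` suffix uses only lowercase letters and digits, which keeps the result valid as a DNS label for the listener endpoint.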
+
+## Permissions
+
+<!--
+There is some overlap of content in the following articles, be sure to make changes to all if necessary:
+/azure-sql/auto-failover-group-overview.md
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/database/auto-failover-group-configure-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+-->
+
+Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+
+Azure RBAC write access is necessary to create and manage failover groups. The [SQL Server Contributor role](../../role-based-access-control/built-in-roles.md#sql-server-contributor) has all the necessary permissions to manage failover groups.
+
+The following table lists specific permission scopes for Azure SQL Database:
+
+| **Action** | **Permission** | **Scope**|
+| :- | :- | :- |
+| **Create failover group**| Azure RBAC write access | Primary server </br> Secondary server </br> All databases in failover group |
+| **Update failover group** | Azure RBAC write access | Failover group </br> All databases on the current primary server|
+| **Fail over failover group** | Azure RBAC write access | Failover group on new server |
+| | |
+
+## Remarks
+
+- Removing a failover group for a single or pooled database does not stop replication, and it does not delete the replicated database. You will need to manually stop geo-replication and delete the database from the secondary server if you want to add a single or pooled database back to a failover group after it's been removed. Failing to do either may result in an error similar to `The operation cannot be performed due to multiple errors` when attempting to add the database to the failover group.
+
+## Next steps
+
+For detailed steps on configuring a failover group, see the following tutorials:
+
+- [Add a single database to a failover group](failover-group-add-single-database-tutorial.md)
+- [Add an elastic pool to a failover group](failover-group-add-elastic-pool-tutorial.md)
+- [Add a managed instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md)
+
+For an overview of Azure SQL Database high availability options, see [geo-replication](active-geo-replication-overview.md) and [auto-failover groups](auto-failover-group-overview.md).
azure-sql Auto Failover Group Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-configure.md
- Title: Configure a failover group-
-description: Learn how to configure an auto-failover group for an Azure SQL Database (both single and pooled) and SQL Managed Instance, using the Azure portal, the Azure CLI, and PowerShell.
-------- Previously updated : 08/14/2019-
-# Configure a failover group for Azure SQL Database
-
-This topic teaches you how to configure an [auto-failover group](auto-failover-group-overview.md) for Azure SQL Database and Azure SQL Managed Instance.
-
-## Single database
-
-Create the failover group and add a single database to it using the Azure portal or PowerShell.
-
-### Prerequisites
-
-Consider the following prerequisites:
-
-- The server login and firewall settings for the secondary server must match that of your primary server.
-
-### Create failover group
-
-# [Portal](#tab/azure-portal)
-
-Create your failover group and add your single database to it using the Azure portal.
-
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
-1. Select the database you want to add to the failover group.
-1. Select the name of the server under **Server name** to open the settings for the server.
-
- ![Open server for single db](./media/auto-failover-group-configure/open-sql-db-server.png)
-
-1. Select **Failover groups** under the **Settings** pane, and then select **Add group** to create a new failover group.
-
- ![Add new failover group](./media/auto-failover-group-configure/sqldb-add-new-failover-group.png)
-
-1. On the **Failover Group** page, enter or select the required values, and then select **Create**.
-
- - **Databases within the group**: Choose the database you want to add to your failover group. Adding the database to the failover group will automatically start the geo-replication process.
-
- ![Add SQL Database to failover group](./media/auto-failover-group-configure/add-sqldb-to-failover-group.png)
-
-# [PowerShell](#tab/azure-powershell)
-
-Create your failover group and add your database to it using PowerShell.
-
- ```powershell-interactive
- $subscriptionId = "<SubscriptionID>"
- $resourceGroupName = "<Resource-Group-Name>"
- $location = "<Region>"
- $adminLogin = "<Admin-Login>"
- $password = "<Complex-Password>"
- $serverName = "<Primary-Server-Name>"
- $databaseName = "<Database-Name>"
- $drLocation = "<DR-Region>"
- $drServerName = "<Secondary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # Create a secondary server in the failover region
- Write-host "Creating a secondary server in the failover region..."
- $drServer = New-AzSqlServer -ResourceGroupName $resourceGroupName `
- -ServerName $drServerName `
- -Location $drLocation `
- -SqlAdministratorCredentials $(New-Object -TypeName System.Management.Automation.PSCredential `
- -ArgumentList $adminlogin, $(ConvertTo-SecureString -String $password -AsPlainText -Force))
- $drServer
-
- # Create a failover group between the servers
- Write-host "Creating a failover group between the primary and secondary server..."
- $failovergroup = New-AzSqlDatabaseFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -PartnerServerName $drServerName `
- -FailoverGroupName $failoverGroupName `
- -FailoverPolicy Automatic `
- -GracePeriodWithDataLossHours 2
- $failovergroup
-
- # Add the database to the failover group
- Write-host "Adding the database to the failover group..."
- Get-AzSqlDatabase `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -DatabaseName $databaseName | `
- Add-AzSqlDatabaseToFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -FailoverGroupName $failoverGroupName
- Write-host "Successfully added the database to the failover group..."
- ```
---
-### Test failover
-
-Test failover of your failover group using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Test failover of your failover group using the Azure portal.
-
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
-1. Select the database you want to add to the failover group.
-
- ![Open server for single db](./media/auto-failover-group-configure/open-sql-db-server.png)
-
-1. Select **Failover groups** under the **Settings** pane and then choose the failover group you just created.
-
- ![Select the failover group from the portal](./media/auto-failover-group-configure/select-failover-group.png)
-
-1. Review which server is primary and which server is secondary.
-1. Select **Failover** from the task pane to fail over your failover group containing your database.
-1. Select **Yes** on the warning that notifies you that TDS sessions will be disconnected.
-
- ![Fail over your failover group containing your database](./media/auto-failover-group-configure/failover-sql-db.png)
-
-1. Review which server is now primary and which server is secondary. If failover succeeded, the two servers should have swapped roles.
-1. Select **Failover** again to fail the servers back to their original roles.
-
-# [PowerShell](#tab/azure-powershell)
-
-Test failover of your failover group using PowerShell.
-
-Check the role of the secondary replica:
-
- ```powershell-interactive
- # Set variables
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
- $drServerName = "<Secondary-Server-Name>"
-
- # Check role of secondary replica
- Write-host "Confirming the secondary replica is secondary...."
- (Get-AzSqlDatabaseFailoverGroup `
- -FailoverGroupName $failoverGroupName `
- -ResourceGroupName $resourceGroupName `
- -ServerName $drServerName).ReplicationRole
- ```
-
-Fail over to the secondary server:
-
- ```powershell-interactive
- # Set variables
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
- $drServerName = "<Secondary-Server-Name>"
-
- # Failover to secondary server
- Write-host "Failing over failover group to the secondary..."
- Switch-AzSqlDatabaseFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $drServerName `
- -FailoverGroupName $failoverGroupName
- Write-host "Failed over failover group successfully to" $drServerName
- ```
-
-Revert failover group back to the primary server:
-
- ```powershell-interactive
- # Set variables
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # Revert failover to primary server
- Write-host "Failing over failover group to the primary...."
- Switch-AzSqlDatabaseFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -FailoverGroupName $failoverGroupName
- Write-host "Failed over failover group successfully back to" $serverName
- ```
---
-> [!IMPORTANT]
-> If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
-
-## Elastic pool
-
-Create the failover group and add an elastic pool to it using the Azure portal, or PowerShell.
-
-### Prerequisites
-
-Consider the following prerequisites:
-
-- The server login and firewall settings for the secondary server must match that of your primary server.
-
-### Create the failover group
-
-Create the failover group for your elastic pool using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Create your failover group and add your elastic pool to it using the Azure portal.
-
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
-1. Select the elastic pool you want to add to the failover group.
-1. On the **Overview** pane, select the name of the server under **Server name** to open the settings for the server.
-
- ![Open server for elastic pool](./media/auto-failover-group-configure/server-for-elastic-pool.png)
-
-1. Select **Failover groups** under the **Settings** pane, and then select **Add group** to create a new failover group.
-
- ![Add new failover group](./media/auto-failover-group-configure/sqldb-add-new-failover-group.png)
-
-1. On the **Failover Group** page, enter or select the required values, and then select **Create**. Either create a new secondary server, or select an existing secondary server.
-
-1. Select **Databases within the group** then choose the elastic pool you want to add to the failover group. If an elastic pool does not already exist on the secondary server, a warning appears prompting you to create an elastic pool on the secondary server. Select the warning, and then select **OK** to create the elastic pool on the secondary server.
-
- ![Add elastic pool to failover group](./media/auto-failover-group-configure/add-elastic-pool-to-failover-group.png)
-
-1. Select **Select** to apply your elastic pool settings to the failover group, and then select **Create** to create your failover group. Adding the elastic pool to the failover group will automatically start the geo-replication process.
-
-# [PowerShell](#tab/azure-powershell)
-
-Create your failover group and add your elastic pool to it using PowerShell.
-
- ```powershell-interactive
- $subscriptionId = "<SubscriptionID>"
- $resourceGroupName = "<Resource-Group-Name>"
- $location = "<Region>"
- $adminLogin = "<Admin-Login>"
- $password = "<Complex-Password>"
- $serverName = "<Primary-Server-Name>"
- $databaseName = "<Database-Name>"
- $poolName = "myElasticPool"
- $drLocation = "<DR-Region>"
- $drServerName = "<Secondary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
-
- # Create a failover group between the servers
- Write-host "Creating failover group..."
- New-AzSqlDatabaseFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -PartnerServerName $drServerName `
- -FailoverGroupName $failoverGroupName `
- -FailoverPolicy Automatic `
- -GracePeriodWithDataLossHours 2
- Write-host "Failover group created successfully."
-
- # Add elastic pool to the failover group
- Write-host "Enumerating databases in elastic pool...."
- $failoverGroup = Get-AzSqlDatabaseFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -FailoverGroupName $failoverGroupName
- $databases = Get-AzSqlElasticPoolDatabase `
- -ResourceGroupName $resourceGroupName `
- -ServerName $serverName `
- -ElasticPoolName $poolName
- Write-host "Adding databases to failover group..."
- $failoverGroup = $failoverGroup | Add-AzSqlDatabaseToFailoverGroup `
- -Database $databases
- Write-host "Databases added to failover group successfully."
- ```
---
-### Test failover
-
-Test failover of your elastic pool using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Fail your failover group over to the secondary server, and then fail back using the Azure portal.
-
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type "Azure SQL" in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
-1. Select the elastic pool you want to add to the failover group.
-1. On the **Overview** pane, select the name of the server under **Server name** to open the settings for the server.
-
- ![Open server for elastic pool](./media/auto-failover-group-configure/server-for-elastic-pool.png)
-1. Select **Failover groups** under the **Settings** pane and then choose the failover group you created earlier.
-
- ![Select the failover group from the portal](./media/auto-failover-group-configure/select-failover-group.png)
-
-1. Review which server is primary, and which server is secondary.
-1. Select **Failover** from the task pane to fail over your failover group containing your elastic pool.
-1. Select **Yes** on the warning that notifies you that TDS sessions will be disconnected.
-
- ![Fail over your failover group containing your database](./media/auto-failover-group-configure/failover-sql-db.png)
-
-1. Review which server is now primary and which server is secondary. If failover succeeded, the two servers should have swapped roles.
-1. Select **Failover** again to fail the failover group back to the original settings.
-
-# [PowerShell](#tab/azure-powershell)
-
-Test failover of your failover group using PowerShell.
-
-Check the role of the secondary replica:
-
- ```powershell-interactive
- # Set variables
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
- $drServerName = "<Secondary-Server-Name>"
-
- # Check role of secondary replica
- Write-host "Confirming the secondary replica is secondary...."
- (Get-AzSqlDatabaseFailoverGroup `
- -FailoverGroupName $failoverGroupName `
- -ResourceGroupName $resourceGroupName `
- -ServerName $drServerName).ReplicationRole
- ```
-
-Fail over to the secondary server:
-
- ```powershell-interactive
- # Set variables
- $resourceGroupName = "<Resource-Group-Name>"
- $serverName = "<Primary-Server-Name>"
- $failoverGroupName = "<Failover-Group-Name>"
- $drServerName = "<Secondary-Server-Name>"
-
- # Failover to secondary server
- Write-host "Failing over failover group to the secondary..."
- Switch-AzSqlDatabaseFailoverGroup `
- -ResourceGroupName $resourceGroupName `
- -ServerName $drServerName `
- -FailoverGroupName $failoverGroupName
- Write-host "Failed over failover group successfully to" $drServerName
- ```
---
-> [!IMPORTANT]
-> If you need to delete the secondary database, remove it from the failover group before deleting it. Deleting a secondary database before it is removed from the failover group can cause unpredictable behavior.
-
-## SQL Managed Instance
-
-Create a failover group between two managed instances in Azure SQL Managed Instance by using the Azure portal or PowerShell.
-
-You will need to either configure [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md) or create a gateway for the virtual network of each SQL Managed Instance, connect the two gateways, and then create the failover group.
-
-Deploy both managed instances to [paired regions](../../availability-zones/cross-region-replication-azure.md) for performance reasons. Managed instances residing in geo-paired regions have much better performance compared to unpaired regions.
-
-### Prerequisites
-
-Consider the following prerequisites:
-
-- The secondary managed instance must be empty.
-- The subnet range for the secondary virtual network must not overlap the subnet range of the primary virtual network.
-- The collation and timezone of the secondary managed instance must match that of the primary managed instance.
-- When connecting the two gateways, the **Shared Key** should be the same for both connections.
-
-### Create primary virtual network gateway
-
-If you have not configured [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md), you can create the primary virtual network gateway with the Azure portal, or PowerShell.
-
-> [!NOTE]
-> The SKU of the gateway affects throughput performance. This article deploys a gateway with the most basic SKU (`VpnGw1`). Deploy a higher SKU (for example, `VpnGw3`) to achieve higher throughput. For all available options, see [Gateway SKUs](../../vpn-gateway/vpn-gateway-about-vpngateways.md#benchmark).
-
-# [Portal](#tab/azure-portal)
-
-Create the primary virtual network gateway using the Azure portal.
-
-1. In the [Azure portal](https://portal.azure.com), go to your resource group and select the **Virtual network** resource for your primary managed instance.
-1. Select **Subnets** under **Settings**, and then add a new **Gateway subnet**. Leave the default values.
-
- ![Add gateway for primary managed instance](./media/auto-failover-group-configure/add-subnet-gateway-primary-vnet.png)
-
-1. Once the subnet gateway is created, select **Create a resource** from the left navigation pane and then type `Virtual network gateway` in the search box. Select the **Virtual network gateway** resource published by **Microsoft**.
-
- ![Create a new virtual network gateway](./media/auto-failover-group-configure/create-virtual-network-gateway.png)
-
-1. Fill out the required fields to configure the gateway for your primary managed instance.
-
- The following table shows the values necessary for the gateway for the primary managed instance:
-
- | **Field** | Value |
- | | |
- | **Subscription** | The subscription where your primary managed instance is. |
- | **Name** | The name for your virtual network gateway. |
- | **Region** | The region where your primary managed instance is. |
- | **Gateway type** | Select **VPN**. |
- | **VPN Type** | Select **Route-based**. |
- | **SKU**| Leave default of `VpnGw1`. |
- | **Location**| The location where your primary managed instance and primary virtual network are. |
- | **Virtual network**| Select the virtual network for your primary managed instance. |
- | **Public IP address**| Select **Create new**. |
- | **Public IP address name**| Enter a name for your IP address. |
- | &nbsp; | &nbsp; |
-
-1. Leave the other values as default, and then select **Review + create** to review the settings for your virtual network gateway.
-
- ![Primary gateway settings](./media/auto-failover-group-configure/settings-for-primary-gateway.png)
-
-1. Select **Create** to create your new virtual network gateway.
-
-# [PowerShell](#tab/azure-powershell)
-
-Create the primary virtual network gateway using PowerShell.
-
- ```powershell-interactive
- $primaryResourceGroupName = "<Primary-Resource-Group>"
- $primaryVnetName = "<Primary-Virtual-Network-Name>"
- $primaryGWName = "<Primary-Gateway-Name>"
- $primaryGWPublicIPAddress = $primaryGWName + "-ip"
- $primaryGWIPConfig = $primaryGWName + "-ipc"
- $primaryGWAsn = 61000
-
- # Get the primary virtual network
- $vnet1 = Get-AzVirtualNetwork -Name $primaryVnetName -ResourceGroupName $primaryResourceGroupName
- $primaryLocation = $vnet1.Location
-
- # Create primary gateway
- Write-host "Creating primary gateway..."
- $subnet1 = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet1
- $gwpip1= New-AzPublicIpAddress -Name $primaryGWPublicIPAddress -ResourceGroupName $primaryResourceGroupName `
- -Location $primaryLocation -AllocationMethod Dynamic
- $gwipconfig1 = New-AzVirtualNetworkGatewayIpConfig -Name $primaryGWIPConfig `
- -SubnetId $subnet1.Id -PublicIpAddressId $gwpip1.Id
-
- $gw1 = New-AzVirtualNetworkGateway -Name $primaryGWName -ResourceGroupName $primaryResourceGroupName `
- -Location $primaryLocation -IpConfigurations $gwipconfig1 -GatewayType Vpn `
- -VpnType RouteBased -GatewaySku VpnGw1 -EnableBgp $true -Asn $primaryGWAsn
- $gw1
- ```
---
-### Create secondary virtual network gateway
-
-Create the secondary virtual network gateway using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Repeat the steps in the previous section to create the virtual network subnet and gateway for the secondary managed instance. Fill out the required fields to configure the gateway for your secondary managed instance.
-
-The following table shows the values necessary for the gateway for the secondary managed instance:
-
- | **Field** | Value |
- | | |
- | **Subscription** | The subscription where your secondary managed instance is. |
- | **Name** | The name for your virtual network gateway, such as `secondary-mi-gateway`. |
- | **Region** | The region where your secondary managed instance is. |
- | **Gateway type** | Select **VPN**. |
- | **VPN Type** | Select **Route-based**. |
- | **SKU**| Leave default of `VpnGw1`. |
- | **Location**| The location where your secondary managed instance and secondary virtual network are. |
- | **Virtual network**| Select the virtual network that was created in section 2, such as `vnet-sql-mi-secondary`. |
- | **Public IP address**| Select **Create new**. |
- | **Public IP address name**| Enter a name for your IP address, such as `secondary-gateway-IP`. |
- | &nbsp; | &nbsp; |
-
- ![Secondary gateway settings](./media/auto-failover-group-configure/settings-for-secondary-gateway.png)
-
-# [PowerShell](#tab/azure-powershell)
-
-Create the secondary virtual network gateway using PowerShell.
-
- ```powershell-interactive
- $secondaryResourceGroupName = "<Secondary-Resource-Group>"
- $secondaryVnetName = "<Secondary-Virtual-Network-Name>"
- $secondaryGWName = "<Secondary-Gateway-Name>"
- $secondaryGWPublicIPAddress = $secondaryGWName + "-IP"
- $secondaryGWIPConfig = $secondaryGWName + "-ipc"
- $secondaryGWAsn = 62000
-
- # Get the secondary virtual network
- $vnet2 = Get-AzVirtualNetwork -Name $secondaryVnetName -ResourceGroupName $secondaryResourceGroupName
- $secondaryLocation = $vnet2.Location
-
- # Create the secondary gateway
- Write-host "Creating secondary gateway..."
- $subnet2 = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet2
- $gwpip2= New-AzPublicIpAddress -Name $secondaryGWPublicIPAddress -ResourceGroupName $secondaryResourceGroupName `
- -Location $secondaryLocation -AllocationMethod Dynamic
- $gwipconfig2 = New-AzVirtualNetworkGatewayIpConfig -Name $secondaryGWIPConfig `
- -SubnetId $subnet2.Id -PublicIpAddressId $gwpip2.Id
-
- $gw2 = New-AzVirtualNetworkGateway -Name $secondaryGWName -ResourceGroupName $secondaryResourceGroupName `
- -Location $secondaryLocation -IpConfigurations $gwipconfig2 -GatewayType Vpn `
- -VpnType RouteBased -GatewaySku VpnGw1 -EnableBgp $true -Asn $secondaryGWAsn
-
- $gw2
- ```
---
-### Connect the gateways
-
-Create connections between the two gateways using the Azure portal or PowerShell.
-
-You need to create two connections: one from the primary gateway to the secondary gateway, and one from the secondary gateway to the primary gateway.
-
-Use the same shared key for both connections.
-
-# [Portal](#tab/azure-portal)
-
-Create connections between the two gateways using the Azure portal.
-
-1. Select **Create a resource** from the [Azure portal](https://portal.azure.com).
-1. Type `connection` in the search box and then press enter to search, which takes you to the **Connection** resource, published by Microsoft.
-1. Select **Create** to create your connection.
-1. On the **Basics** tab, select the following values and then select **OK**.
- 1. Select `VNet-to-VNet` for the **Connection type**.
- 1. Select your subscription from the drop-down.
- 1. Select the resource group for your managed instance in the drop-down.
- 1. Select the location of your primary managed instance from the drop-down.
-1. On the **Settings** tab, select or enter the following values and then select **OK**:
- 1. Choose the primary network gateway for the **First virtual network gateway**, such as `Primary-Gateway`.
- 1. Choose the secondary network gateway for the **Second virtual network gateway**, such as `Secondary-Gateway`.
- 1. Select the checkbox next to **Establish bidirectional connectivity**.
- 1. Either leave the default primary connection name, or rename it to a value of your choice.
- 1. Provide a **Shared key (PSK)** for the connection, such as `mi1m2psk`.
-
- ![Create gateway connection](./media/auto-failover-group-configure/create-gateway-connection.png)
-
-1. On the **Summary** tab, review the settings for your bidirectional connection and then select **OK** to create your connection.
-
-# [PowerShell](#tab/azure-powershell)
-
-Create connections between the two gateways using PowerShell.
-
- ```powershell-interactive
- $vpnSharedKey = "mi1mi2psk"
- $primaryResourceGroupName = "<Primary-Resource-Group>"
- $primaryGWConnection = "<Primary-connection-name>"
- $primaryLocation = "<Primary-Region>"
- $secondaryResourceGroupName = "<Secondary-Resource-Group>"
- $secondaryGWConnection = "<Secondary-connection-name>"
- $secondaryLocation = "<Secondary-Region>"
-
- # Connect the primary to secondary gateway
- Write-host "Connecting the primary gateway"
- New-AzVirtualNetworkGatewayConnection -Name $primaryGWConnection -ResourceGroupName $primaryResourceGroupName `
- -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw2 -Location $primaryLocation `
- -ConnectionType Vnet2Vnet -SharedKey $vpnSharedKey -EnableBgp $true
- $primaryGWConnection
-
- # Connect the secondary to primary gateway
- Write-host "Connecting the secondary gateway"
-
- New-AzVirtualNetworkGatewayConnection -Name $secondaryGWConnection -ResourceGroupName $secondaryResourceGroupName `
- -VirtualNetworkGateway1 $gw2 -VirtualNetworkGateway2 $gw1 -Location $secondaryLocation `
- -ConnectionType Vnet2Vnet -SharedKey $vpnSharedKey -EnableBgp $true
- $secondaryGWConnection
- ```
---
-### Create the failover group
-
-Create the failover group for your managed instances by using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Create the failover group for your SQL Managed Instances by using the Azure portal.
-
-1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
-1. Select the primary managed instance you want to add to the failover group.
-1. Under **Settings**, navigate to **Instance Failover Groups** and then choose to **Add group** to open the **Instance Failover Group** page.
-
- ![Add a failover group](./media/auto-failover-group-configure/add-failover-group.png)
-
-1. On the **Instance Failover Group** page, type the name of your failover group and then choose the secondary managed instance from the drop-down. Select **Create** to create your failover group.
-
- ![Create failover group](./media/auto-failover-group-configure/create-failover-group.png)
-
-1. Once failover group deployment is complete, you will be taken back to the **Failover group** page.
-
-# [PowerShell](#tab/azure-powershell)
-
-Create the failover group for your managed instances using PowerShell.
-
- ```powershell-interactive
- $primaryResourceGroupName = "<Primary-Resource-Group>"
- $failoverGroupName = "<Failover-Group-Name>"
- $primaryLocation = "<Primary-Region>"
- $secondaryLocation = "<Secondary-Region>"
- $primaryManagedInstance = "<Primary-Managed-Instance-Name>"
- $secondaryManagedInstance = "<Secondary-Managed-Instance-Name>"
-
- # Create failover group
- Write-host "Creating the failover group..."
- $failoverGroup = New-AzSqlDatabaseInstanceFailoverGroup -Name $failoverGroupName `
- -Location $primaryLocation -ResourceGroupName $primaryResourceGroupName -PrimaryManagedInstanceName $primaryManagedInstance `
- -PartnerRegion $secondaryLocation -PartnerManagedInstanceName $secondaryManagedInstance `
- -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
- $failoverGroup
- ```
---
-### Test failover
-
-Test failover of your failover group using the Azure portal or PowerShell.
-
-# [Portal](#tab/azure-portal)
-
-Test failover of your failover group using the Azure portal.
-
-1. Navigate to your _secondary_ managed instance within the [Azure portal](https://portal.azure.com) and select **Instance Failover Groups** under **Settings**.
-1. Review which managed instance is the primary, and which managed instance is the secondary.
-1. Select **Failover** and then select **Yes** on the warning about TDS sessions being disconnected.
-
- ![Fail over the failover group](./media/auto-failover-group-configure/failover-mi-failover-group.png)
-
-1. Review which managed instance is the primary and which instance is the secondary. If failover succeeded, the two instances should have switched roles.
-
- ![Managed instances have switched roles after failover](./media/auto-failover-group-configure/mi-switched-after-failover.png)
-
-1. Go to the new _secondary_ managed instance and select **Failover** once again to fail the primary instance back to the primary role.
-
-# [PowerShell](#tab/azure-powershell)
-
-Test failover of your failover group using PowerShell.
-
- ```powershell-interactive
- $primaryResourceGroupName = "<Primary-Resource-Group>"
- $secondaryResourceGroupName = "<Secondary-Resource-Group>"
- $failoverGroupName = "<Failover-Group-Name>"
- $primaryLocation = "<Primary-Region>"
- $secondaryLocation = "<Secondary-Region>"
- $primaryManagedInstance = "<Primary-Managed-Instance-Name>"
- $secondaryManagedInstance = "<Secondary-Managed-Instance-Name>"
-
- # Verify the current primary role
- Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
- -Location $secondaryLocation -Name $failoverGroupName
-
- # Failover the primary managed instance to the secondary role
- Write-host "Failing primary over to the secondary location"
- Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $secondaryResourceGroupName `
- -Location $secondaryLocation -Name $failoverGroupName | Switch-AzSqlDatabaseInstanceFailoverGroup
- Write-host "Successfully failed failover group to secondary location"
-
- # Verify the current primary role
- Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
- -Location $secondaryLocation -Name $failoverGroupName
-
- # Fail primary managed instance back to primary role
- Write-host "Failing primary back to primary role"
- Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
- -Location $primaryLocation -Name $failoverGroupName | Switch-AzSqlDatabaseInstanceFailoverGroup
- Write-host "Successfully failed failover group to primary location"
-
- # Verify the current primary role
- Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
- -Location $secondaryLocation -Name $failoverGroupName
- ```
---
-## Use Private Link
-
-Using a private link allows you to associate a logical server with a specific private IP address within the virtual network and subnet.
-
-To use a private link with your failover group, do the following:
-
-1. Ensure your primary and secondary servers are in a [paired region](../../availability-zones/cross-region-replication-azure.md).
-1. Create the virtual network and subnet in each region to host private endpoints for the primary and secondary servers such that they have non-overlapping IP address spaces. For example, a primary virtual network address range of 10.0.0.0/16 and a secondary virtual network address range of 10.0.0.1/16 overlap. For more information about virtual network address ranges, see the blog [Designing Azure virtual networks](https://devblogs.microsoft.com/premier-developer/understanding-cidr-notation-when-designing-azure-virtual-networks-and-subnets/).
-1. Create a [private endpoint and Azure Private DNS zone for the primary server](../../private-link/create-private-endpoint-portal.md#create-a-private-endpoint).
-1. Create a private endpoint for the secondary server as well, but this time choose to reuse the same Private DNS zone that was created for the primary server.
-1. Once the private link is established, you can create the failover group following the steps outlined previously in this article.
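As a quick sanity check before creating the two virtual networks, you can verify that candidate address ranges do not overlap. This is a minimal sketch using Python's standard `ipaddress` module; the network values are illustrative, not part of the article's configuration:

```python
import ipaddress

def ranges_overlap(cidr_a: str, cidr_b: str) -> bool:
    """Return True if the two CIDR ranges share any addresses."""
    net_a = ipaddress.ip_network(cidr_a, strict=False)
    net_b = ipaddress.ip_network(cidr_b, strict=False)
    return net_a.overlaps(net_b)

# 10.0.0.1/16 normalizes to 10.0.0.0/16, so it overlaps the primary range.
print(ranges_overlap("10.0.0.0/16", "10.0.0.1/16"))  # True
# A distinct range such as 10.1.0.0/16 does not overlap.
print(ranges_overlap("10.0.0.0/16", "10.1.0.0/16"))  # False
```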
--
-## Locate listener endpoint
-
-Once your failover group is configured, update the connection string for your application to the listener endpoint. This will keep your application connected to the failover group listener, rather than the primary database, elastic pool, or instance database. That way, you don't have to manually update the connection string every time your database entity fails over, and traffic is routed to whichever entity is currently primary.
-
-The listener endpoint is in the form of `fog-name.database.windows.net`, and is visible in the Azure portal, when viewing the failover group:
-
-![Failover group connection string](./media/auto-failover-group-configure/find-failover-group-connection-string.png)
-
-## Remarks
-
-- Removing a failover group for a single or pooled database does not stop replication, and it does not delete the replicated database. If you want to add a single or pooled database back to a failover group after it has been removed, manually stop geo-replication and delete the database from the secondary server first. Failing to do either may result in an error similar to `The operation cannot be performed due to multiple errors` when you attempt to add the database to the failover group.
-
-## Next steps
-
-For detailed steps on configuring a failover group, see the following tutorials:
-
-- [Add a single database to a failover group](failover-group-add-single-database-tutorial.md)
-- [Add an elastic pool to a failover group](failover-group-add-elastic-pool-tutorial.md)
-- [Add a managed instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md)
-
-For an overview of Azure SQL Database high availability options, see [geo-replication](active-geo-replication-overview.md) and [auto-failover groups](auto-failover-group-overview.md).
azure-sql Auto Failover Group Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-overview.md
-Title: Auto-failover groups
-description: Auto-failover groups let you manage geo-replication and automatic / coordinated failover of a group of databases on a server, or all databases on a managed instance.
-Previously updated : 2/2/2022
-# Use auto-failover groups to enable transparent and coordinated geo-failover of multiple databases
-
-The auto-failover groups feature allows you to manage the replication and failover of a group of databases on a server or all databases in a managed instance to another region. It is a declarative abstraction on top of the existing [active geo-replication](active-geo-replication-overview.md) feature, designed to simplify deployment and management of geo-replicated databases at scale. You can initiate a geo-failover manually or you can delegate it to the Azure service based on a user-defined policy. The latter option allows you to automatically recover multiple related databases in a secondary region after a catastrophic failure or other unplanned event that results in full or partial loss of the SQL Database or SQL Managed Instance availability in the primary region. A failover group can include one or multiple databases, typically used by the same application. Additionally, you can use the readable secondary databases to offload read-only query workloads.
-
-> [!NOTE]
-> Auto-failover groups support geo-replication of all databases in the group to only one secondary server or instance in a different region. If you need to create multiple Azure SQL Database geo-secondary replicas (in the same or different regions) for the same primary replica, use [active geo-replication](active-geo-replication-overview.md).
->
-
-When you are using auto-failover groups with automatic failover policy, an outage that impacts one or several of the databases in the group will result in an automatic geo-failover. Typically, these are outages that cannot be automatically mitigated by the built-in high availability infrastructure. Examples of geo-failover triggers include an incident caused by a SQL Database tenant ring or control ring being down due to an OS kernel memory leak on compute nodes, or an incident caused by one or more tenant rings being down because a wrong network cable was accidentally cut during routine hardware decommissioning. For more information, see [SQL Database High Availability](high-availability-sla.md).
-
-In addition, auto-failover groups provide read-write and read-only listener end-points that remain unchanged during geo-failovers. Whether you use manual or automatic failover activation, a geo-failover switches all secondary databases in the group to the primary role. After the geo-failover is completed, the DNS record is automatically updated to redirect the endpoints to the new region. For geo-failover RPO and RTO, see [Overview of Business Continuity](business-continuity-high-availability-disaster-recover-hadr-overview.md).
-
-
-You can manage auto-failover groups using:
-
-- [Azure portal](geo-distributed-application-configure-tutorial.md)
-- [Azure CLI: Failover Group](scripts/add-database-to-failover-group-cli.md)
-- [PowerShell: Failover Group](scripts/add-database-to-failover-group-powershell.md)
-- [REST API: Failover group](/rest/api/sql/failovergroups)
-
-When configuring a failover group, ensure that authentication and network access on the secondary is set up to function correctly after geo-failover, when the geo-secondary becomes the new primary. For details, see [SQL Database security after disaster recovery](active-geo-replication-security-configure.md).
-
-To achieve full business continuity, adding regional database redundancy is only part of the solution. Recovering an application (service) end-to-end after a catastrophic failure requires recovery of all components that constitute the service and any dependent services. Examples of these components include the client software (for example, a browser with a custom JavaScript), web front ends, storage, and DNS. It is critical that all components are resilient to the same failures and become available within the recovery time objective (RTO) of your application. Therefore, you need to identify all dependent services and understand the guarantees and capabilities they provide. Then, you must take adequate steps to ensure that your service functions during the failover of the services on which it depends. For more information about designing solutions for disaster recovery, see [Designing Cloud Solutions for Disaster Recovery Using active geo-replication](designing-cloud-solutions-for-disaster-recovery.md).
-
-## <a name="terminology-and-capabilities"></a> Failover group terminology and capabilities
-
-- **Failover group (FOG)**
-
- A failover group is a named group of databases managed by a single server or within a managed instance that can fail over as a unit to another region in case all or some primary databases become unavailable due to an outage in the primary region. When it's created for SQL Managed Instance, a failover group contains all user databases in the instance and therefore only one failover group can be configured on an instance.
-
- > [!IMPORTANT]
- > The name of the failover group must be globally unique within the `.database.windows.net` domain.
-
-- **Servers**
-
- Some or all of the user databases on a logical server can be placed in a failover group. A single server can support multiple failover groups.
-
-- **Primary**
-
- The server or managed instance that hosts the primary databases in the failover group.
-
-- **Secondary**
-
- The server or managed instance that hosts the secondary databases in the failover group. The secondary cannot be in the same region as the primary.
-
-- **Adding single databases to failover group**
-
- You can put several single databases on the same server into the same failover group. If you add a single database to the failover group, it automatically creates a secondary database with the same edition and compute size on the secondary server, which you specified when the failover group was created. If you add a database that already has a secondary database on the secondary server, that geo-replication link is inherited by the group. When you add a database that already has a secondary database on a server that is not part of the failover group, a new secondary is created on the secondary server.
-
- > [!IMPORTANT]
- > Make sure that the secondary server doesn't have a database with the same name unless it is an existing secondary database. In failover groups for SQL Managed Instance, all user databases are replicated. You cannot pick a subset of user databases for replication in the failover group.
-
-- **Adding databases in elastic pool to failover group**
-
- You can put all or several databases within an elastic pool into the same failover group. If the primary database is in an elastic pool, the secondary is automatically created in the elastic pool with the same name (secondary pool). You must ensure that the secondary server contains an elastic pool with the same exact name and enough free capacity to host the secondary databases that will be created by the failover group. If you add a database in the pool that already has a secondary database in the secondary pool, that geo-replication link is inherited by the group. When you add a database that already has a secondary database in a server that is not part of the failover group, a new secondary is created in the secondary pool.
-
-- **Initial Seeding**
-
- When adding databases, elastic pools, or managed instances to a failover group, there is an initial seeding phase before data replication starts. The initial seeding phase is the longest and most expensive operation. Once initial seeding completes, data is synchronized, and then only subsequent data changes are replicated. The time it takes for the initial seeding to complete depends on the size of your data, number of replicated databases, the load on primary databases, and the speed of the link between the primary and secondary. Under normal circumstances, possible seeding speed is up to 500 GB an hour for SQL Database, and up to 360 GB an hour for SQL Managed Instance. Seeding is performed for all databases in parallel.
-
- For SQL Managed Instance, consider the speed of the Express Route link between the two instances when estimating the time of the initial seeding phase. If the speed of the link between the two instances is slower than what is necessary, the time to seed is likely to be noticeably impacted. You can use the stated seeding speed, number of databases, total size of data, and the link speed to estimate how long the initial seeding phase will take before data replication starts. For example, for a single 100 GB database, the initial seed phase would take about 1.2 hours if the link is capable of pushing 84 GB per hour, and if there are no other databases being seeded. If the link can only transfer 10 GB per hour, then seeding a 100 GB database will take about 10 hours. If there are multiple databases to replicate, seeding will be executed in parallel, and, when combined with a slow link speed, the initial seeding phase may take considerably longer, especially if the parallel seeding of data from all databases exceeds the available link bandwidth. If the network bandwidth between two instances is limited and you are adding multiple managed instances to a failover group, consider adding multiple managed instances to the failover group sequentially, one by one. Given an appropriately sized gateway SKU between the two managed instances, and if corporate network bandwidth allows it, it's possible to achieve speeds as high as 360 GB an hour.
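The estimates above are simple division: the effective seeding speed is the slower of the link speed and the service's stated maximum. The following rough sketch (Python, illustrative only; real seeding time also depends on load and parallel seeding of multiple databases) reproduces the article's arithmetic:

```python
def estimate_seeding_hours(total_gb: float,
                           link_gb_per_hour: float,
                           service_max_gb_per_hour: float = 360.0) -> float:
    """Rough initial-seeding estimate: data size divided by the effective
    throughput, which is capped by both the link speed and the service's
    stated maximum seeding speed (360 GB/hour for SQL Managed Instance)."""
    effective = min(link_gb_per_hour, service_max_gb_per_hour)
    return total_gb / effective

# The examples from the text: 100 GB over an 84 GB/hour link is about
# 1.2 hours; the same database over a 10 GB/hour link takes about 10 hours.
print(round(estimate_seeding_hours(100, 84), 1))  # 1.2
print(round(estimate_seeding_hours(100, 10), 1))  # 10.0
```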
-
-- **DNS zone**
-
- A unique ID that is automatically generated when a new SQL Managed Instance is created. A multi-domain (SAN) certificate for this instance is provisioned to authenticate the client connections to any instance in the same DNS zone. The two managed instances in the same failover group must share the DNS zone.
-
- > [!NOTE]
- > A DNS zone ID is not required or used for failover groups created for SQL Database.
-
-- **Failover group read-write listener**
-
- A DNS CNAME record that points to the current primary. It is created automatically when the failover group is created and allows the read-write workload to transparently reconnect to the primary when the primary changes after failover. When the failover group is created on a server, the DNS CNAME record for the listener URL is formed as `<fog-name>.database.windows.net`. When the failover group is created on a SQL Managed Instance, the DNS CNAME record for the listener URL is formed as `<fog-name>.<zone_id>.database.windows.net`.
-
-- **Failover group read-only listener**
-
- A DNS CNAME record that points to the current secondary. It is created automatically when the failover group is created and allows the read-only SQL workload to transparently connect to the secondary when the secondary changes after failover. When the failover group is created on a server, the DNS CNAME record for the listener URL is formed as `<fog-name>.secondary.database.windows.net`. When the failover group is created on a SQL Managed Instance, the DNS CNAME record for the listener URL is formed as `<fog-name>.secondary.<zone_id>.database.windows.net`.
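The listener URL formats above follow a single pattern. This small helper (Python, a sketch with hypothetical names; the real CNAMEs are created by the service, not by you) builds the four possible hostnames from a failover group name and an optional managed instance DNS zone ID:

```python
def listener_url(fog_name, read_only=False, zone_id=None):
    """Build a failover group listener hostname.

    SQL Database:         <fog-name>[.secondary].database.windows.net
    SQL Managed Instance: <fog-name>[.secondary].<zone-id>.database.windows.net
    """
    parts = [fog_name]
    if read_only:
        parts.append("secondary")
    if zone_id:
        parts.append(zone_id)
    parts.append("database.windows.net")
    return ".".join(parts)

# "fog1" and "abc123" are placeholder values for illustration.
print(listener_url("fog1"))                    # fog1.database.windows.net
print(listener_url("fog1", read_only=True))    # fog1.secondary.database.windows.net
print(listener_url("fog1", zone_id="abc123"))  # fog1.abc123.database.windows.net
```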
-
-- **Automatic failover policy**
-
- By default, a failover group is configured with an automatic failover policy. The system triggers a geo-failover after the failure is detected and the grace period has expired. The system must verify that the outage cannot be mitigated by the built-in [high availability infrastructure](high-availability-sla.md), for example due to the scale of the impact. If you want to control the geo-failover workflow from the application or manually, you can turn off automatic failover policy.
-
- > [!NOTE]
- > Because verification of the scale of the outage and how quickly it can be mitigated involves human actions, the grace period cannot be set below one hour. This limitation applies to all databases in the failover group regardless of their data synchronization state.
-
-- **Read-only failover policy**
-
- By default, the failover of the read-only listener is disabled. It ensures that the performance of the primary is not impacted when the secondary is offline. However, it also means the read-only sessions will not be able to connect until the secondary is recovered. If you cannot tolerate downtime for the read-only sessions and can use the primary for both read-only and read-write traffic at the expense of the potential performance degradation of the primary, you can enable failover for the read-only listener by configuring the `AllowReadOnlyFailoverToPrimary` property. In that case, the read-only traffic will be automatically redirected to the primary if the secondary is not available.
-
- > [!NOTE]
- > The `AllowReadOnlyFailoverToPrimary` property only has effect if automatic failover policy is enabled and an automatic geo-failover has been triggered. In that case, if the property is set to True, the new primary will serve both read-write and read-only sessions.
-
-- **Planned failover**
-
- Planned failover performs full data synchronization between primary and secondary databases before the secondary switches to the primary role. This guarantees no data loss. Planned failover is used in the following scenarios:
-
- - Perform disaster recovery (DR) drills in production when data loss is not acceptable
- - Relocate the databases to a different region
- - Return the databases to the primary region after the outage has been mitigated (failback)
-
-- **Unplanned failover**
-
- Unplanned or forced failover immediately switches the secondary to the primary role without waiting for recent changes to propagate from the primary. This operation may result in data loss. Unplanned failover is used as a recovery method during outages when the primary is not accessible. When the outage is mitigated, the old primary will automatically reconnect and become a new secondary. A planned failover may be executed to fail back, returning the replicas to their original primary and secondary roles.
-
-- **Manual failover**
-
- You can initiate a geo-failover manually at any time regardless of the automatic failover configuration. During an outage that impacts the primary, if automatic failover policy is not configured, a manual failover is required to promote the secondary to the primary role. You can initiate a forced (unplanned) or friendly (planned) failover. A friendly failover is only possible when the old primary is accessible, and can be used to relocate the primary to the secondary region without data loss. When a failover is completed, the DNS records are automatically updated to ensure connectivity to the new primary.
-
-- **Grace period with data loss**
-
- Because the secondary databases are synchronized using asynchronous replication, an automatic geo-failover may result in data loss. You can customize the automatic failover policy to reflect your application's tolerance to data loss. By configuring `GracePeriodWithDataLossHours`, you can control how long the system waits before initiating a forced failover, which may result in data loss.
-
-- **Multiple failover groups**
-
- You can configure multiple failover groups for the same pair of servers to control the scope of geo-failovers. Each group fails over independently. If your tenant-per-database application is deployed in multiple regions and uses elastic pools, you can use this capability to mix primary and secondary databases in each pool. This way you may be able to reduce the impact of an outage to only some tenant databases.
-
- > [!NOTE]
- > SQL Managed Instance does not support multiple failover groups.
-
-## Permissions
-
-Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md). The [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role has all the necessary permissions to manage failover groups.
-
-### <a name="create-failover-group"></a> Create a failover group
-
-To create a failover group, you need Azure RBAC write access to both the primary and secondary servers, and to all databases in the failover group. For a SQL Managed Instance, you need Azure RBAC write access to both the primary and secondary SQL Managed Instance, but permissions on individual databases are not relevant, because individual SQL Managed Instance databases cannot be added to or removed from a failover group.
-
-### Update a failover group
-
-To update a failover group, you need Azure RBAC write access to the failover group, and all databases on the current primary server or managed instance.
-
-### Fail over a failover group
-
-To fail over a failover group, you need Azure RBAC write access to the failover group on the new primary server or managed instance.
-
-## <a name="best-practices-for-sql-database"></a> Failover group best practices for SQL Database
-
-The auto-failover group must be configured on the primary server and will connect it to the secondary server in a different Azure region. The groups can include all or some databases in these servers. The following diagram illustrates a typical configuration of a geo-redundant cloud application using multiple databases and auto-failover group.
-
-![Diagram shows a typical configuration of a geo-redundant cloud application using multiple databases and auto-failover group.](./media/auto-failover-group-overview/auto-failover-group.png)
-
-> [!NOTE]
-> See [Add SQL Database to a failover group](failover-group-add-single-database-tutorial.md) for a detailed step-by-step tutorial on adding a database in SQL Database to a failover group.
-
-When designing a service with business continuity in mind, follow these general guidelines:
-
-### <a name="using-one-or-several-failover-groups-to-manage-failover-of-multiple-databases"></a> Use one or several failover groups to manage failover of multiple databases
-
-One or many failover groups can be created between two servers in different regions (primary and secondary servers). Each group can include one or several databases that are recovered as a unit in case all or some primary databases become unavailable due to an outage in the primary region. Creating a failover group creates geo-secondary databases with the same service objective as the primary. If you add an existing geo-replication relationship to a failover group, make sure the geo-secondary is configured with the same service tier and compute size as the primary.
-
-### <a name="using-read-write-listener-for-oltp-workload"></a> Use the read-write listener to connect to primary
-
-For read-write workloads, use `<fog-name>.database.windows.net` as the server name in the connection string. Connections will be automatically directed to the primary. This name does not change after failover. Note that failover involves updating the DNS record, so client connections are redirected to the new primary only after the client DNS cache is refreshed. The time to live (TTL) of the primary and secondary listener DNS record is 30 seconds.
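
As an illustrative sketch (not part of the official guidance), the read-write listener plugs into an ordinary connection string; `fog-contoso` and `MyDb` are hypothetical names:

```python
# Hypothetical sketch: building a connection string that targets the
# failover group's read-write listener. The listener name stays stable
# across geo-failovers; only the DNS record behind it changes, so the
# connection string never needs updating.
def read_write_conn_str(fog_name: str, database: str) -> str:
    server = f"{fog_name}.database.windows.net"
    return (
        f"Server=tcp:{server},1433;"
        f"Database={database};"
        "Encrypt=True;Connection Timeout=30;"
    )

print(read_write_conn_str("fog-contoso", "MyDb"))
```

Because of the 30-second DNS TTL noted above, a client that reconnects shortly after a geo-failover may briefly resolve the old primary until its DNS cache refreshes.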
-
-### <a name="using-read-only-listener-for-read-only-workload"></a> Use the read-only listener to connect to geo-secondary
-
-If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. For read-only sessions, use `<fog-name>.secondary.database.windows.net` as the server name in the connection string. Connections will be automatically directed to the geo-secondary. It is also recommended that you indicate read intent in the connection string by using `ApplicationIntent=ReadOnly`.
-
-> [!NOTE]
-> In Premium, Business Critical, and Hyperscale service tiers, SQL Database supports the use of [read-only replicas](read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location.
->
-> - To connect to a read-only replica in the primary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.database.windows.net`.
-> - To connect to a read-only replica in the secondary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.secondary.database.windows.net`.
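
A minimal sketch of choosing between the two read-only endpoints described in the note above (names are hypothetical):

```python
# Hypothetical sketch: pick the listener endpoint for a read-only session.
# ApplicationIntent=ReadOnly routes the session to a read-only replica in
# the applicable service tiers.
def read_only_conn_str(fog_name: str, database: str, use_geo_secondary: bool) -> str:
    host = (
        f"{fog_name}.secondary.database.windows.net"
        if use_geo_secondary
        else f"{fog_name}.database.windows.net"
    )
    return (
        f"Server=tcp:{host},1433;Database={database};"
        "ApplicationIntent=ReadOnly;Encrypt=True;"
    )

print(read_only_conn_str("fog-contoso", "MyDb", use_geo_secondary=True))
```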
-
-### <a name="preparing-for-performance-degradation"></a> Potential performance degradation after geo-failover
-
-A typical Azure application uses multiple Azure services and consists of multiple components. The automatic geo-failover of the failover group is triggered based on the state of the Azure SQL components alone. Other Azure services in the primary region may not be affected by the outage and their components may still be available in that region. Once the primary databases switch to the secondary (DR) region, the latency between the dependent components may increase. To avoid the impact of higher latency on the application's performance, ensure the redundancy of all the application's components in the DR region, follow these [network security guidelines](#failover-groups-and-network-security), and orchestrate the geo-failover of relevant application components together with the database.
-
-### <a name="preparing-for-data-loss"></a> Potential data loss after geo-failover
-
-If an outage occurs in the primary region, recent transactions may not be able to replicate to the geo-secondary. If the automatic failover policy is configured, the system waits for the period specified by `GracePeriodWithDataLossHours` before initiating an automatic geo-failover. The default value is 1 hour. This favors database availability over no data loss. Setting `GracePeriodWithDataLossHours` to a larger number, such as 24 hours, or disabling automatic geo-failover lets you reduce the likelihood of data loss at the expense of database availability.
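
The grace-period trade-off can be sketched as a simple decision rule (illustrative only, not the service's actual implementation):

```python
# Illustrative sketch of the automatic failover policy described above:
# a forced (potentially lossy) geo-failover is initiated only after the
# outage has lasted at least GracePeriodWithDataLossHours.
def should_force_failover(outage_hours: float, grace_period_hours: float = 1.0) -> bool:
    # The default grace period is 1 hour, favoring availability over
    # guaranteed zero data loss.
    return outage_hours >= grace_period_hours

# A 2-hour outage triggers failover under the default 1-hour grace period;
# raising the grace period to 24 hours defers it.
print(should_force_failover(2.0), should_force_failover(2.0, grace_period_hours=24.0))
```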
-
-> [!IMPORTANT]
-> Elastic pools with 800 or fewer DTUs or 8 or fewer vCores, and more than 250 databases may encounter issues including longer planned geo-failovers and degraded performance. These issues are more likely to occur for write intensive workloads, when geo-replicas are widely separated by geography, or when multiple secondary geo-replicas are used for each database. A symptom of these issues is an increase in geo-replication lag over time, potentially leading to a more extensive data loss in an outage. This lag can be monitored using [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database). If these issues occur, then mitigation includes scaling up the pool to have more DTUs or vCores, or reducing the number of geo-replicated databases in the pool.
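
The growing-lag symptom called out above can be watched for with a simple trend check. In practice you would sample `replication_lag_sec` from `sys.dm_geo_replication_link_status`; the readings below are hypothetical:

```python
# Hypothetical sketch: flag a geo-replication link whose lag keeps growing,
# the symptom described in the note above. Inputs would come from periodic
# samples of replication_lag_sec.
def lag_is_growing(lag_seconds: list, min_increase: float = 1.0) -> bool:
    # True when lag increases by at least `min_increase` seconds between
    # every pair of consecutive samples.
    return all(b - a >= min_increase for a, b in zip(lag_seconds, lag_seconds[1:]))

print(lag_is_growing([5, 12, 30, 75]))  # steadily falling behind
print(lag_is_growing([5, 4, 6, 5]))     # normal jitter
```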
-
-### <a name="changing-secondary-region-of-the-failover-group"></a> Change the secondary region of a failover group
-
-To illustrate the change sequence, we will assume that server A is the primary server, server B is the existing secondary server, and server C is the new secondary in the third region. To make the transition, follow these steps:
-
-1. Create additional secondaries of each database on server A to server C using [active geo-replication](active-geo-replication-overview.md). Each database on server A will have two secondaries, one on server B and one on server C. This will guarantee that the primary databases remain protected during the transition.
-2. Delete the failover group. At this point, login attempts using the failover group endpoints will fail.
-3. Re-create the failover group with the same name between servers A and C.
-4. Add all primary databases on server A to the new failover group. At this point, the login attempts will stop failing.
-5. Delete server B. All databases on B will be deleted automatically.
-
-### <a name="changing-primary-region-of-the-failover-group"></a> Change the primary region of a failover group
-
-To illustrate the change sequence, we will assume server A is the primary server, server B is the existing secondary server, and server C is the new primary in the third region. To make the transition, follow these steps:
-
-1. Perform a planned geo-failover to switch the primary server to B. Server A will become the new secondary server. The failover may result in several minutes of downtime. The actual time will depend on the size of failover group.
-2. Create additional secondaries of each database on server B to server C using [active geo-replication](active-geo-replication-overview.md). Each database on server B will have two secondaries, one on server A and one on server C. This will guarantee that the primary databases remain protected during the transition.
-3. Delete the failover group. At this point, login attempts using the failover group endpoints will fail.
-4. Re-create the failover group with the same name between servers B and C.
-5. Add all primary databases on B to the new failover group. At this point, the login attempts will stop failing.
-6. Perform a planned geo-failover of the failover group to switch B and C. Now server C will become the primary and B the secondary. All secondary databases on server A will be automatically linked to the primaries on C. As in step 1, the failover may result in several minutes of downtime.
-7. Delete server A. All databases on A will be deleted automatically.
-
-> [!IMPORTANT]
-> When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a non-zero probability of somebody else creating a failover group or a server DNS alias with the same name. Because failover group names and DNS aliases must be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover group names.
-
-## <a name="best-practices-for-sql-managed-instance"></a> Failover group best practices for SQL Managed Instance
-
-The auto-failover group must be configured on the primary instance and will connect it to the secondary instance in a different Azure region. All user databases in the instance will be replicated to the secondary instance. System databases like _master_ and _msdb_ will not be replicated.
-
-The following diagram illustrates a typical configuration of a geo-redundant cloud application using managed instance and auto-failover group.
-
-![auto failover diagram](./media/auto-failover-group-overview/auto-failover-group-mi.png)
-
-> [!NOTE]
-> See [Add managed instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md) for a detailed step-by-step tutorial on adding a SQL Managed Instance to a failover group.
-
-> [!IMPORTANT]
-> If you deploy auto-failover groups in a hub-and-spoke network topology cross-region, replication traffic should go directly between the two managed instance subnets rather than be directed through the hub networks.
-
-If your application uses SQL Managed Instance as the data tier, follow these general guidelines when designing for business continuity:
-
-### <a name="creating-the-secondary-instance"></a> Create the geo-secondary managed instance
-
-To ensure non-interrupted connectivity to the primary SQL Managed Instance after failover, both the primary and secondary instances must be in the same DNS zone. This guarantees that the same multi-domain (SAN) certificate can be used to authenticate client connections to either of the two instances in the failover group. When your application is ready for production deployment, create a secondary SQL Managed Instance in a different region and make sure it shares the DNS zone with the primary SQL Managed Instance. You can do this by specifying an optional parameter during creation. If you are using PowerShell or the REST API, the name of the optional parameter is `DNSZonePartner`. The name of the corresponding optional field in the Azure portal is *Primary Managed Instance*.
-
-> [!IMPORTANT]
-> The first managed instance created in the subnet determines the DNS zone for all subsequent instances in the same subnet. This means that two instances from the same subnet cannot belong to different DNS zones.
-
-For more information about creating the secondary SQL Managed Instance in the same DNS zone as the primary instance, see [Create a secondary managed instance](../managed-instance/failover-group-add-instance-tutorial.md#create-a-secondary-managed-instance).
-
-### <a name="using-geo-paired-regions"></a> Use paired regions
-
-Deploy both managed instances to [paired regions](../../availability-zones/cross-region-replication-azure.md) for performance reasons. SQL Managed Instance failover groups in paired regions have better performance compared to unpaired regions.
-
-### <a name="enabling-replication-traffic-between-two-instances"></a> Enable geo-replication traffic between two managed instances
-
-Because each managed instance is isolated in its own VNet, two-directional traffic between these VNets must be allowed. See [Azure VPN gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) for more information.
-
-### <a name="creating-a-failover-group-between-managed-instances-in-different-subscriptions"></a> Create a failover group between managed instances in different subscriptions
-
-You can create a failover group between SQL Managed Instances in two different subscriptions, as long as the subscriptions are associated with the same [Azure Active Directory Tenant](../../active-directory/fundamentals/active-directory-whatis.md#terminology). When using PowerShell, you can do this by specifying the `PartnerSubscriptionId` parameter for the secondary SQL Managed Instance. When using the REST API, each instance ID included in the `properties.managedInstancePairs` parameter can have its own subscription ID.
-
-> [!IMPORTANT]
-> The Azure portal does not support creation of failover groups across different subscriptions. Also, for existing failover groups across different subscriptions and/or resource groups, failover cannot be initiated manually via the portal from the primary SQL Managed Instance. Initiate it from the geo-secondary instance instead.
-
-### <a name="managing-failover-to-secondary-instance"></a> Manage geo-failover to a geo-secondary instance
-
-The failover group will manage geo-failover of all databases on the primary managed instance. When a group is created, each database in the instance will be automatically geo-replicated to the geo-secondary instance. You cannot use failover groups to initiate a partial failover of a subset of databases.
-
-> [!IMPORTANT]
-> If a database is dropped on the primary managed instance, it will also be dropped automatically on the geo-secondary managed instance.
-
-### <a name="using-read-write-listener-for-oltp-workload"></a> Use the read-write listener to connect to the primary managed instance
-
-For read-write workloads, use `<fog-name>.<zone_id>.database.windows.net` as the server name. Connections will be automatically directed to the primary. This name does not change after failover. The geo-failover involves updating the DNS record, so the client connections are redirected to the new primary only after the client DNS cache is refreshed. Because the secondary instance shares the DNS zone with the primary, the client application will be able to reconnect to it using the same server-side SAN certificate. The read-write listener and read-only listener cannot be reached via [public endpoint for managed instance](../managed-instance/public-endpoint-configure.md).
-
-### <a name="using-read-only-listener-to-connect-to-the-secondary-instance"></a> Use the read-only listener to connect to the geo-secondary managed instance
-
-If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. To connect directly to the geo-secondary, use `<fog-name>.secondary.<zone_id>.database.windows.net` as the server name. The read-write listener and read-only listener cannot be reached via [public endpoint for managed instance](../managed-instance/public-endpoint-configure.md).
-
-> [!NOTE]
-> In the Business Critical tier, SQL Managed Instance supports the use of [read-only replicas](read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-replicated secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location.
->
-> - To connect to a read-only replica in the primary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.<zone_id>.database.windows.net`.
-> - To connect to a read-only replica in the secondary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.secondary.<zone_id>.database.windows.net`.
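
The managed instance host name formats above can be sketched as follows; `fog-contoso` and `a1b2c3d4` are hypothetical failover group and DNS zone IDs:

```python
# Hypothetical sketch: assemble the managed instance listener host names.
# The DNS zone ID is part of the host name; the read-only listener adds
# the "secondary" label.
def mi_listener(fog_name: str, zone_id: str, read_only: bool = False) -> str:
    secondary = "secondary." if read_only else ""
    return f"{fog_name}.{secondary}{zone_id}.database.windows.net"

print(mi_listener("fog-contoso", "a1b2c3d4"))
print(mi_listener("fog-contoso", "a1b2c3d4", read_only=True))
```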
-
-### Potential performance degradation after failover to the geo-secondary managed instance
-
-A typical Azure application uses multiple Azure services and consists of multiple components. The automatic geo-failover of the failover group is triggered based on the state of the Azure SQL components alone. Other Azure services in the primary region may not be affected by the outage and their components may still be available in that region. Once the primary databases switch to the secondary region, the latency between the dependent components may increase. To avoid the impact of higher latency on the application's performance, ensure the redundancy of all the application's components in the secondary region and fail over application components together with the database. At configuration time, follow [network security guidelines](#failover-groups-and-network-security) to ensure connectivity to the database in the secondary region.
-
-### Potential data loss after failover to the geo-secondary managed instance
-
-If an outage occurs in the primary region, recent transactions may not be able to replicate to the geo-secondary. If the automatic failover policy is configured, a geo-failover is triggered immediately if, to the best of the service's knowledge, it will incur no data loss. Otherwise, the failover is deferred for the period you specify using `GracePeriodWithDataLossHours`. If you configured the automatic failover policy, be prepared for data loss. In general, during outages, Azure favors availability. Setting `GracePeriodWithDataLossHours` to a larger number, such as 24 hours, or disabling automatic geo-failover lets you reduce the likelihood of data loss at the expense of database availability.
-
-The DNS update of the read-write listener will happen immediately after the failover is initiated. This operation will not result in data loss. However, the process of switching database roles can take up to 5 minutes under normal conditions. Until it is completed, some databases in the new primary instance will still be read-only. If a failover is initiated using PowerShell, the operation to switch the primary replica role is synchronous. If it is initiated using the Azure portal, the UI will indicate completion status. If it is initiated using the REST API, use the standard Azure Resource Manager polling mechanism to monitor for completion.
-
-> [!IMPORTANT]
-> Use manual planned failover to move the primary back to the original location once the outage that caused the geo-failover is mitigated.
-
-### Change the secondary region of the managed instance failover group
-
-Let's assume that instance A is the primary instance, instance B is the existing secondary instance, and instance C is the new secondary instance in the third region. To make the transition, follow these steps:
-
-1. Create instance C with the same size as A and in the same DNS zone.
-2. Delete the failover group between instances A and B. At this point, logins will fail because the SQL aliases for the failover group listeners have been deleted and the gateway will not recognize the failover group name. The secondary databases will be disconnected from the primaries and will become read-write databases.
-3. Create a failover group with the same name between instance A and C. Follow the instructions in [failover group with SQL Managed Instance tutorial](../managed-instance/failover-group-add-instance-tutorial.md). This is a size-of-data operation and will complete when all databases from instance A are seeded and synchronized.
-4. Delete instance B if not needed to avoid unnecessary charges.
-
-> [!NOTE]
-> After step 2 and until step 3 is completed, the databases in instance A will remain unprotected from a catastrophic failure of instance A.
-
-### Change the primary region of the managed instance failover group
-
-Let's assume instance A is the primary instance, instance B is the existing secondary instance, and instance C is the new primary instance in the third region. To make the transition, follow these steps:
-
-1. Create instance C with the same size as B and in the same DNS zone.
-2. Connect to instance B and manually failover to switch the primary instance to B. Instance A will become the new secondary instance automatically.
-3. Delete the failover group between instances A and B. At this point, login attempts using the failover group endpoints will fail. The secondary databases on A will be disconnected from the primaries and will become read-write databases.
-4. Create a failover group with the same name between instances A and C. Follow the instructions in the [failover group with managed instance tutorial](../managed-instance/failover-group-add-instance-tutorial.md). This is a size-of-data operation and will complete when all databases from instance A are seeded and synchronized. At this point, the login attempts will stop failing.
-5. Delete instance A if not needed to avoid unnecessary charges.
-
-> [!CAUTION]
-> After step 3 and until step 4 is completed, the databases in instance A will remain unprotected from a catastrophic failure of instance A.
-
-> [!IMPORTANT]
-> When the failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a non-zero probability of somebody else creating a failover group with the same name. Because failover group names must be globally unique, this will prevent you from using the same name again. To minimize this risk, don't use generic failover group names.
-
-### Enable scenarios dependent on objects from the system databases
-
-System databases are **not** replicated to the secondary instance in a failover group. To enable scenarios that depend on objects from the system databases, make sure to create the same objects on the secondary instance and keep them synchronized with the primary instance.
-
-For example, if you plan to use the same logins on the secondary instance, make sure to create them with the identical SID.
-
-```SQL
--- Create the login on the secondary instance with the same SID as on the primary
-CREATE LOGIN foo WITH PASSWORD = '<enterStrongPasswordHere>', SID = <login_sid>;
-```
-
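
As a hypothetical helper (not from this article), the statement above can be generated from the SID bytes retrieved on the primary, for example via `SELECT sid FROM sys.sql_logins WHERE name = 'foo'`; the SID value below is made up:

```python
# Hypothetical sketch: emit a CREATE LOGIN statement carrying the primary
# login's SID. SQL Server accepts the SID as a hexadecimal literal (0x...).
def create_login_sql(name: str, password: str, sid: bytes) -> str:
    return (
        f"CREATE LOGIN [{name}] WITH PASSWORD = '{password}', "
        f"SID = 0x{sid.hex().upper()};"
    )

# The SID bytes here are illustrative only.
print(create_login_sql("foo", "<enterStrongPasswordHere>", bytes.fromhex("01020304")))
```
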
-### Synchronize instance properties and retention policies between primary and secondary instance
-
-Instances in a failover group remain separate Azure resources, and no changes made to the configuration of the primary instance will be automatically replicated to the secondary instance. Make sure to perform all relevant changes on both the primary _and_ secondary instance. For example, if you change the backup storage redundancy or long-term backup retention policy on the primary instance, make sure to change it on the secondary instance as well.
-
-## Failover groups and network security
-
-For some applications, security rules require that network access to the data tier be restricted to a specific component or components, such as a VM or web service. This requirement presents some challenges for business continuity design and the use of failover groups. Consider the following options when implementing such restricted access.
-
-### <a name="using-failover-groups-and-virtual-network-rules"></a> Use failover groups and virtual network service endpoints
-
-If you are using [Virtual Network service endpoints and rules](vnet-service-endpoint-rule-overview.md) to restrict access to your database in SQL Database or SQL Managed Instance, be aware that each virtual network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet. Therefore, only the client applications deployed in the same region can connect to the primary database. Since a geo-failover results in the SQL Database client sessions being rerouted to a server in a different (secondary) region, these sessions will fail if they originate from a client outside of that region. For that reason, the automatic failover policy cannot be enabled if the participating servers or instances are included in the Virtual Network rules. To support manual failover, follow these steps:
-
-1. Provision the redundant copies of the front-end components of your application (web service, virtual machines etc.) in the secondary region.
-2. Configure the [virtual network rules](vnet-service-endpoint-rule-overview.md) individually for primary and secondary server.
-3. Enable the [front-end failover using a Traffic manager configuration](designing-cloud-solutions-for-disaster-recovery.md#scenario-1-using-two-azure-regions-for-business-continuity-with-minimal-downtime).
-4. Initiate manual geo-failover when the outage is detected. This option is optimized for applications that require consistent latency between the front end and the data tier, and it supports recovery when either the front end, the data tier, or both are impacted by the outage.
-
-> [!NOTE]
-> If you are using the **read-only listener** to load-balance a read-only workload, make sure that this workload is executed in a VM or other resource in the secondary region so it can connect to the secondary database.
-
-### Use failover groups and firewall rules
-
-If your business continuity plan requires failover groups with automatic failover, you can restrict access to your database in SQL Database by using public IP firewall rules. To support automatic failover, follow these steps:
-
-1. [Create a public IP](../../virtual-network/ip-services/virtual-network-public-ip-address.md#create-a-public-ip-address)
-2. [Create a public load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) and assign the public IP to it.
-3. [Create a virtual network and the virtual machines](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) for your front-end components.
-4. [Create network security group](../../virtual-network/network-security-groups-overview.md) and configure inbound connections.
-5. Ensure that the outbound connections are open to Azure SQL Database in a region by using an `Sql.<Region>` [service tag](../../virtual-network/network-security-groups-overview.md#service-tags).
-6. Create a [SQL Database firewall rule](firewall-configure.md) to allow inbound traffic from the public IP address you create in step 1.
-
-For more information on how to configure outbound access and what IP to use in the firewall rules, see [Load balancer outbound connections](../../load-balancer/load-balancer-outbound-connections.md).
-
-The above configuration will ensure that an automatic geo-failover will not block connections from the front-end components and assumes that the application can tolerate the longer latency between the front end and the data tier.
-
-> [!IMPORTANT]
-> To guarantee business continuity during regional outages you must ensure geographic redundancy for both front-end components and databases.
-
-## <a name="enabling-geo-replication-between-managed-instances-and-their-vnets"></a> Enabling geo-replication between managed instance virtual networks
-
-When you set up a failover group between primary and secondary SQL Managed Instances in two different regions, each instance is isolated using an independent virtual network. To allow replication traffic between these VNets, ensure these prerequisites are met:
-
-- The two instances of SQL Managed Instance need to be in different Azure regions.
-- The two instances of SQL Managed Instance need to be in the same service tier, and have the same storage size.
-- Your secondary instance of SQL Managed Instance must be empty (no user databases).
-- The virtual networks used by the instances of SQL Managed Instance need to be connected through a [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md). When two virtual networks connect through an on-premises network, ensure there is no firewall rule blocking ports 5022 and 11000-11999. Global VNet peering is supported with the limitation described in the note below.
-
- > [!IMPORTANT]
- > [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). This means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well as for all subsequent managed instances created in those subnets. For all other SQL managed instances, peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring a non-default [maintenance window](./maintenance-window.md) on the instances, as doing so moves the instances into new virtual clusters that support global virtual network peering.
-
-- The two SQL Managed Instance VNets cannot have overlapping IP addresses.
-- You need to set up your Network Security Groups (NSG) such that port 5022 and the range 11000-12000 are open inbound and outbound for connections from the subnet of the other managed instance. This allows replication traffic between the instances.
-
- > [!IMPORTANT]
- > Misconfigured NSG security rules leads to stuck database seeding operations.
-
-- The secondary SQL Managed Instance is configured with the correct DNS zone ID. The DNS zone is a property of a SQL Managed Instance and its underlying virtual cluster, and its ID is included in the host name address. The zone ID is generated as a random string when the first SQL Managed Instance is created in each VNet, and the same ID is assigned to all other instances in the same subnet. Once assigned, the DNS zone cannot be modified. SQL Managed Instances included in the same failover group must share the DNS zone. You accomplish this by passing the primary instance's zone ID as the value of the DnsZonePartner parameter when creating the secondary instance.
-
- > [!NOTE]
- > For a detailed tutorial on configuring failover groups with SQL Managed Instance, see [add a SQL Managed Instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md).
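The DNS zone pairing described above might be sketched with Azure PowerShell as follows. This is an illustration only: the instance, resource group, and subnet names are placeholders, and other required parameters (networking, SKU, admin credentials) are abbreviated.

```powershell
# Sketch only; names are illustrative and several required parameters are omitted.
$primary = Get-AzSqlInstance -Name "mi-primary" -ResourceGroupName "myRG"

# Pass the primary instance's resource ID as DnsZonePartner so the secondary
# is created with the same DNS zone, a prerequisite for the failover group.
New-AzSqlInstance -Name "mi-secondary" -ResourceGroupName "myRG" `
    -Location "West US 2" -SubnetId $secondarySubnetId `
    -AdministratorCredential $cred -DnsZonePartner $primary.Id
```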
-
-## <a name="upgrading-or-downgrading-primary-database"></a> Scale primary database
-
-You can scale up or scale down the primary database to a different compute size (within the same service tier) without disconnecting any geo-secondaries. When scaling up, we recommend that you scale up the geo-secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary first, and then scale down the secondary. When you scale a database to a different service tier, this recommendation is enforced.
-
-This sequence is recommended specifically to avoid the problem where the geo-secondary at a lower SKU gets overloaded and must be re-seeded during an upgrade or downgrade process. You could also avoid the problem by making the primary read-only, at the expense of impacting all read-write workloads against the primary.
-
-> [!NOTE]
-> If you created a geo-secondary as part of the failover group configuration it is not recommended to scale down the geo-secondary. This is to ensure your data tier has sufficient capacity to process your regular workload after a geo-failover.
-
-## <a name="preventing-the-loss-of-critical-data"></a> Prevent loss of critical data
-
-Due to the high latency of wide area networks, geo-replication uses an asynchronous replication mechanism. Asynchronous replication makes the possibility of data loss unavoidable if the primary fails. To protect critical transactions from data loss, an application developer can call the [sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync) stored procedure immediately after committing the transaction. Calling `sp_wait_for_database_copy_sync` blocks the calling thread until the last committed transaction has been transmitted and hardened in the transaction log of the secondary database. However, it does not wait for the transmitted transactions to be replayed (redone) on the secondary. `sp_wait_for_database_copy_sync` is scoped to a specific geo-replication link. Any user with the connection rights to the primary database can call this procedure.
-
-> [!NOTE]
-> `sp_wait_for_database_copy_sync` prevents data loss after geo-failover for specific transactions, but does not guarantee full synchronization for read access. The delay caused by a `sp_wait_for_database_copy_sync` procedure call can be significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.
-
-## Failover groups and point-in-time restore
-
-For information about using point-in-time restore with failover groups, see [Point in Time Recovery (PITR)](recovery-using-backups.md#point-in-time-restore).
-
-## Limitations of failover groups
-
-Be aware of the following limitations:
-
-- Failover groups cannot be created between two servers or instances in the same Azure region.
-- Failover groups cannot be renamed. You will need to delete the group and re-create it with a different name.
-- Database rename is not supported for databases in a failover group. You will need to temporarily delete the failover group to be able to rename a database, or remove the database from the failover group.
-- System databases are not replicated to the secondary instance in a failover group. Therefore, scenarios that depend on objects from the system databases require those objects to be manually created on the secondary instance and manually kept in sync after any changes made on the primary instance. The only exception is the Service Master Key (SMK) for SQL Managed Instance, which is replicated automatically to the secondary instance during creation of the failover group. Any subsequent changes to the SMK on the primary instance will not be replicated to the secondary instance.
-- Failover groups cannot be created between instances if any of them are in an instance pool.
-
-## <a name="programmatically-managing-failover-groups"></a> Programmatically manage failover groups
-
-As discussed previously, auto-failover groups can also be managed programmatically using Azure PowerShell, Azure CLI, and REST API. The following tables describe the set of commands available. Active geo-replication includes a set of Azure Resource Manager APIs for management, including the [Azure SQL Database REST API](/rest/api/sql/) and [Azure PowerShell cmdlets](/powershell/azure/). These APIs require the use of resource groups and support Azure role-based access control (Azure RBAC). For more information on how to implement access roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
-
-### <a name="manage-sql-database-failover"></a> Manage SQL Database geo-failover
-
-# [PowerShell](#tab/azure-powershell)
-
-| Cmdlet | Description |
-| | |
-| [New-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/new-azsqldatabasefailovergroup) |This command creates a failover group and registers it on both primary and secondary servers|
-| [Remove-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/remove-azsqldatabasefailovergroup) | Removes a failover group from the server |
-| [Get-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/get-azsqldatabasefailovergroup) | Retrieves a failover group's configuration |
-| [Set-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/set-azsqldatabasefailovergroup) |Modifies configuration of a failover group |
-| [Switch-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/switch-azsqldatabasefailovergroup) | Triggers failover of a failover group to the secondary server |
-| [Add-AzSqlDatabaseToFailoverGroup](/powershell/module/az.sql/add-azsqldatabasetofailovergroup)|Adds one or more databases to a failover group|
-
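For example, the cmdlets above might be combined to create a failover group and add a database to it. This is a sketch only; the resource group, server, group, and database names are illustrative.

```powershell
# Sketch only; names are illustrative.
New-AzSqlDatabaseFailoverGroup -ResourceGroupName "myRG" `
    -ServerName "primary-server" -PartnerServerName "secondary-server" `
    -FailoverGroupName "myfog" -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1

# Add an existing database; a geo-secondary is created on the partner server automatically.
Get-AzSqlDatabase -ResourceGroupName "myRG" -ServerName "primary-server" -DatabaseName "mydb" |
    Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "myRG" `
        -ServerName "primary-server" -FailoverGroupName "myfog"
```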
-# [Azure CLI](#tab/azure-cli)
-
-| Command | Description |
-| | |
-| [az sql failover-group create](/cli/azure/sql/failover-group#az-sql-failover-group-create) |This command creates a failover group and registers it on both primary and secondary servers|
-| [az sql failover-group delete](/cli/azure/sql/failover-group#az-sql-failover-group-delete) | Removes a failover group from the server |
-| [az sql failover-group show](/cli/azure/sql/failover-group#az-sql-failover-group-show) | Retrieves a failover group configuration |
-| [az sql failover-group update](/cli/azure/sql/failover-group#az-sql-failover-group-update) |Modifies a failover group's configuration and/or adds one or more databases to a failover group|
-| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Triggers failover of a failover group to the secondary server |
-
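The equivalent flow with Azure CLI might look like the following sketch (resource names are illustrative):

```azurecli
# Sketch only; names are illustrative.
az sql failover-group create --name myfog \
    --resource-group myRG --server primary-server \
    --partner-server secondary-server \
    --failover-policy Automatic --grace-period 1 \
    --add-db mydb

# Later, trigger a planned failover by making the secondary server the new primary:
az sql failover-group set-primary --name myfog \
    --resource-group myRG --server secondary-server
```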
-# [REST API](#tab/rest-api)
-
-| API | Description |
-| | |
-| [Create or Update Failover Group](/rest/api/sql/failovergroups/createorupdate) | Creates or updates a failover group |
-| [Delete Failover Group](/rest/api/sql/failovergroups/delete) | Removes a failover group from the server |
-| [Failover (Planned)](/rest/api/sql/failovergroups/failover) | Triggers failover from the current primary server to the secondary server with full data synchronization.|
-| [Force Failover Allow Data Loss](/rest/api/sql/failovergroups/forcefailoverallowdataloss) | Triggers failover from the current primary server to the secondary server without synchronizing data. This operation may result in data loss. |
-| [Get Failover Group](/rest/api/sql/failovergroups/get) | Retrieves a failover group's configuration. |
-| [List Failover Groups By Server](/rest/api/sql/failovergroups/listbyserver) | Lists the failover groups on a server. |
-| [Update Failover Group](/rest/api/sql/failovergroups/update) | Updates a failover group's configuration. |
---
-### <a name="manage-sql-managed-instance-failover"></a> Manage SQL Managed Instance geo-failover
-
-# [PowerShell](#tab/azure-powershell)
-
-| Cmdlet | Description |
-| | |
-| [New-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/new-azsqldatabaseinstancefailovergroup) |This command creates a failover group and registers it on both primary and secondary instances|
-| [Set-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/set-azsqldatabaseinstancefailovergroup) |Modifies configuration of a failover group|
-| [Get-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/get-azsqldatabaseinstancefailovergroup) |Retrieves a failover group's configuration|
-| [Switch-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/switch-azsqldatabaseinstancefailovergroup) |Triggers failover of a failover group to the secondary instance|
-| [Remove-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/remove-azsqldatabaseinstancefailovergroup) | Removes a failover group|
--
-# [Azure CLI](#tab/azure-cli)
-
-| Command | Description |
-| | |
-| [az sql failover-group create](/cli/azure/sql/failover-group#az-sql-failover-group-create) |This command creates a failover group and registers it on both primary and secondary servers|
-| [az sql failover-group delete](/cli/azure/sql/failover-group#az-sql-failover-group-delete) | Removes a failover group from the server |
-| [az sql failover-group show](/cli/azure/sql/failover-group#az-sql-failover-group-show) | Retrieves a failover group configuration |
-| [az sql failover-group update](/cli/azure/sql/failover-group#az-sql-failover-group-update) |Modifies a failover group's configuration and/or adds one or more databases to a failover group|
-| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Triggers failover of a failover group to the secondary server |
-
-# [REST API](#tab/rest-api)
-
-| API | Description |
-| | |
-| [Create or Update Failover Group](/rest/api/sql/instancefailovergroups/createorupdate) | Creates or updates a failover group's configuration |
-| [Delete Failover Group](/rest/api/sql/instancefailovergroups/delete) | Removes a failover group from the instance |
-| [Failover (Planned)](/rest/api/sql/instancefailovergroups/failover) | Triggers failover from the current primary instance to this instance with full data synchronization. |
-| [Force Failover Allow Data Loss](/rest/api/sql/instancefailovergroups/forcefailoverallowdataloss) | Triggers failover from the current primary instance to the secondary instance without synchronizing data. This operation may result in data loss. |
-| [Get Failover Group](/rest/api/sql/instancefailovergroups/get) | Retrieves a failover group's configuration. |
-| [List Failover Groups - List By Location](/rest/api/sql/instancefailovergroups/listbylocation) | Lists the failover groups in a location. |
---
-## Next steps
-
-- For detailed tutorials, see
- - [Add SQL Database to a failover group](failover-group-add-single-database-tutorial.md)
- - [Add an elastic pool to a failover group](failover-group-add-elastic-pool-tutorial.md)
- - [Add a SQL Managed Instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md)
-- For sample scripts, see:
- - [Use PowerShell to configure active geo-replication for Azure SQL Database](scripts/setup-geodr-and-failover-database-powershell.md)
- - [Use PowerShell to configure active geo-replication for a pooled database in Azure SQL Database](scripts/setup-geodr-and-failover-elastic-pool-powershell.md)
- - [Use PowerShell to add an Azure SQL Database to a failover group](scripts/add-database-to-failover-group-powershell.md)
-- For a business continuity overview and scenarios, see [Business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md).
-- To learn about Azure SQL Database automated backups, see [SQL Database automated backups](automated-backups-overview.md).
-- To learn about using automated backups for recovery, see [Restore a database from the service-initiated backups](recovery-using-backups.md).
-- To learn about authentication requirements for a new primary server and database, see [SQL Database security after disaster recovery](active-geo-replication-security-configure.md).
azure-sql Auto Failover Group Sql Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/auto-failover-group-sql-db.md
+
+ Title: Auto-failover groups overview & best practices
+description: Auto-failover groups let you manage geo-replication and automatic / coordinated failover of a group of databases on a server for both single and pooled databases in Azure SQL Database.
+ Last updated : 03/01/2022
+# Auto-failover groups overview & best practices (Azure SQL Database)
+
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](auto-failover-group-sql-db.md)
+> * [Azure SQL Managed Instance](../managed-instance/auto-failover-group-sql-mi.md)
+
+The auto-failover groups feature allows you to manage the replication and failover of some or all databases on a [logical server](logical-servers.md) to another region. This article focuses on using the Auto-failover group feature with Azure SQL Database and some best practices.
+
+To get started, review [Configure auto-failover group](auto-failover-group-configure-sql-db.md). For an end-to-end experience, see the [Auto-failover group tutorial](failover-group-add-single-database-tutorial.md).
++
+> [!NOTE]
+> - This article covers auto-failover groups for Azure SQL Database. For Azure SQL Managed Instance, see [Auto-failover groups in Azure SQL Managed Instance](../managed-instance/auto-failover-group-sql-mi.md).
+> - Auto-failover groups support geo-replication of all databases in the group to only one secondary server in a different region. If you need to create multiple Azure SQL Database geo-secondary replicas (in the same or different regions) for the same primary replica, use [active geo-replication](active-geo-replication-overview.md).
+>
+
+## Overview
+++
+## <a name="terminology-and-capabilities"></a> Terminology and capabilities
+
+<!--
+There is some overlap of content in the following articles, be sure to make changes to all if necessary:
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+-->
++
+- **Failover group (FOG)**
+
+ A failover group is a named group of databases managed by a single server that can fail over as a unit to another Azure region in case all or some primary databases become unavailable due to an outage in the primary region.
+
+ > [!IMPORTANT]
+ > The name of the failover group must be globally unique within the `.database.windows.net` domain.
+
+- **Servers**
+
+  Some or all of the user databases on a [logical server](logical-servers.md) can be placed in a failover group. A single server can also host multiple failover groups.
+
+- **Primary**
+
+ The server that hosts the primary databases in the failover group.
+
+- **Secondary**
+
+ The server that hosts the secondary databases in the failover group. The secondary cannot be in the same Azure region as the primary.
+
+- **Adding single databases to failover group**
+
+  You can put several single databases on the same server into the same failover group. If you add a single database to the failover group, a secondary database is automatically created with the same edition and compute size on the secondary server, which you specified when the failover group was created. If you add a database that already has a secondary database on the secondary server, that geo-replication link is inherited by the group. When you add a database that already has a secondary database on a server that is not part of the failover group, a new secondary is created on the secondary server.
+
+ > [!IMPORTANT]
+ > Make sure that the secondary server doesn't have a database with the same name unless it is an existing secondary database.
+
+- **Adding databases in elastic pool to failover group**
+
+ You can put all or several databases within an elastic pool into the same failover group. If the primary database is in an elastic pool, the secondary is automatically created in the elastic pool with the same name (secondary pool). You must ensure that the secondary server contains an elastic pool with the same exact name and enough free capacity to host the secondary databases that will be created by the failover group. If you add a database in the pool that already has a secondary database in the secondary pool, that geo-replication link is inherited by the group. When you add a database that already has a secondary database in a server that is not part of the failover group, a new secondary is created in the secondary pool.
+
+- **Failover group read-write listener**
+
+ A DNS CNAME record that points to the current primary. It is created automatically when the failover group is created and allows the read-write workload to transparently reconnect to the primary when the primary changes after failover. When the failover group is created on a server, the DNS CNAME record for the listener URL is formed as `<fog-name>.database.windows.net`.
+
+- **Failover group read-only listener**
+
+ A DNS CNAME record that points to the current secondary. It is created automatically when the failover group is created and allows the read-only SQL workload to transparently connect to the secondary when the secondary changes after failover. When the failover group is created on a server, the DNS CNAME record for the listener URL is formed as `<fog-name>.secondary.database.windows.net`.
+
+- **Multiple failover groups**
+
+ You can configure multiple failover groups for the same pair of servers to control the scope of geo-failovers. Each group fails over independently. If your tenant-per-database application is deployed in multiple regions and uses elastic pools, you can use this capability to mix primary and secondary databases in each pool. This way you may be able to reduce the impact of an outage to only some tenant databases.
++
+## Failover group architecture
+
+A failover group in Azure SQL Database can include one or multiple databases, typically used by the same application. When you use auto-failover groups with an automatic failover policy, an outage that impacts one or several of the databases in the group results in an automatic geo-failover.
+
+The auto-failover group must be configured on the primary server and will connect it to the secondary server in a different Azure region. The groups can include all or some databases in these servers. The following diagram illustrates a typical configuration of a geo-redundant cloud application using multiple databases and auto-failover group.
+
+![Diagram shows a typical configuration of a geo-redundant cloud application using multiple databases and auto-failover group.](./media/auto-failover-group-overview/auto-failover-group.png)
+
+When designing a service with business continuity in mind, follow the general guidelines and best practices outlined in this article. When configuring a failover group, ensure that authentication and network access on the secondary is set up to function correctly after geo-failover, when the geo-secondary becomes the new primary. For details, see [SQL Database security after disaster recovery](active-geo-replication-security-configure.md). For more information about designing solutions for disaster recovery, see [Designing Cloud Solutions for Disaster Recovery Using active geo-replication](designing-cloud-solutions-for-disaster-recovery.md).
+
+For information about using point-in-time restore with failover groups, see [Point in Time Recovery (PITR)](recovery-using-backups.md#point-in-time-restore).
++
+## Initial seeding
+
+When adding databases or elastic pools to a failover group, there is an initial seeding phase before data replication starts. The initial seeding phase is the longest and most expensive operation. Once initial seeding completes, data is synchronized, and then only subsequent data changes are replicated. The time it takes for the initial seeding to complete depends on the size of your data, number of replicated databases, the load on primary databases, and the speed of the link between the primary and secondary. Under normal circumstances, possible seeding speed is up to 500 GB an hour for SQL Database. Seeding is performed for all databases in parallel.
++
+## <a name="using-one-or-several-failover-groups-to-manage-failover-of-multiple-databases"></a> Use multiple failover groups to fail over multiple databases
+
+One or many failover groups can be created between two servers in different regions (primary and secondary servers). Each group can include one or several databases that are recovered as a unit in case all or some primary databases become unavailable due to an outage in the primary region. Creating a failover group creates geo-secondary databases with the same service objective as the primary. If you add an existing geo-replication relationship to a failover group, make sure the geo-secondary is configured with the same service tier and compute size as the primary.
+
+## <a name="using-read-write-listener-for-oltp-workload"></a> Use the read-write listener (primary)
+
+For read-write workloads, use `<fog-name>.database.windows.net` as the server name in the connection string. Connections will be automatically directed to the primary. This name does not change after failover. Note that failover involves updating the DNS record, so client connections are redirected to the new primary only after the client DNS cache is refreshed. The time to live (TTL) of the primary and secondary listener DNS record is 30 seconds.
+
+## <a name="using-read-only-listener-for-read-only-workload"></a> Use the read-only listener (secondary)
+
+If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. For read-only sessions, use `<fog-name>.secondary.database.windows.net` as the server name in the connection string. Connections will be automatically directed to the geo-secondary. It is also recommended that you indicate read intent in the connection string by using `ApplicationIntent=ReadOnly`.
+
+In Premium, Business Critical, and Hyperscale service tiers, SQL Database supports the use of [read-only replicas](read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location:
+- To connect to a read-only replica in the primary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.database.windows.net`.
+- To connect to a read-only replica in the secondary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.secondary.database.windows.net`.
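For illustration, ADO.NET-style connection strings for the two listeners might look like the following. The database name and credentials are placeholders.

```
Server=tcp:<fog-name>.database.windows.net,1433;Initial Catalog=<database>;User ID=<user>;Password=<password>;
Server=tcp:<fog-name>.secondary.database.windows.net,1433;Initial Catalog=<database>;User ID=<user>;Password=<password>;ApplicationIntent=ReadOnly;
```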
+
+## <a name="preparing-for-performance-degradation"></a> Potential performance degradation after failover
+
+A typical Azure application uses multiple Azure services and consists of multiple components. The automatic geo-failover of the failover group is triggered based on the state of the Azure SQL components alone. Other Azure services in the primary region may not be affected by the outage, and their components may still be available in that region. Once the primary databases switch to the secondary (DR) region, the latency between the dependent components may increase. To avoid the impact of higher latency on the application's performance, ensure the redundancy of all the application's components in the DR region, follow these [network security guidelines](#failover-groups-and-network-security), and orchestrate the geo-failover of relevant application components together with the database.
+
+## <a name="preparing-for-data-loss"></a> Potential data loss after failover
+
+If an outage occurs in the primary region, recent transactions may not have been replicated to the geo-secondary. If the automatic failover policy is configured, the system waits for the period specified by `GracePeriodWithDataLossHours` before initiating an automatic geo-failover. The default value is 1 hour. This favors database availability over no data loss. Setting `GracePeriodWithDataLossHours` to a larger number, such as 24 hours, or disabling automatic geo-failover lets you reduce the likelihood of data loss at the expense of database availability.
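As a sketch, the grace period of an existing failover group can be adjusted with Azure PowerShell (resource names are illustrative):

```powershell
# Sketch only; wait up to 24 hours before automatic geo-failover to reduce potential data loss.
Set-AzSqlDatabaseFailoverGroup -ResourceGroupName "myRG" -ServerName "primary-server" `
    -FailoverGroupName "myfog" -FailoverPolicy Automatic -GracePeriodWithDataLossHours 24
```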
+
+> [!IMPORTANT]
+> Elastic pools with 800 or fewer DTUs or 8 or fewer vCores, and more than 250 databases may encounter issues including longer planned geo-failovers and degraded performance. These issues are more likely to occur for write intensive workloads, when geo-replicas are widely separated by geography, or when multiple secondary geo-replicas are used for each database. A symptom of these issues is an increase in geo-replication lag over time, potentially leading to a more extensive data loss in an outage. This lag can be monitored using [sys.dm_geo_replication_link_status](/sql/relational-databases/system-dynamic-management-views/sys-dm-geo-replication-link-status-azure-sql-database). If these issues occur, then mitigation includes scaling up the pool to have more DTUs or vCores, or reducing the number of geo-replicated databases in the pool.
++
+## Failover groups and network security
+
+For some applications, security rules require that network access to the data tier be restricted to a specific component or components, such as a VM or web service. This requirement presents some challenges for business continuity design and the use of failover groups. Consider the following options when implementing such restricted access.
+
+### <a name="using-failover-groups-and-virtual-network-rules"></a> Use failover groups and virtual network service endpoints
+
+If you are using [Virtual Network service endpoints and rules](vnet-service-endpoint-rule-overview.md) to restrict access to your database in SQL Database, be aware that each virtual network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet. Therefore, only the client applications deployed in the same region can connect to the primary database. Since a geo-failover results in the SQL Database client sessions being rerouted to a server in a different (secondary) region, these sessions will fail if originated from a client outside of that region. For that reason, the automatic failover policy cannot be enabled if the participating servers or instances are included in the Virtual Network rules. To support manual failover, follow these steps:
+
+1. Provision the redundant copies of the front-end components of your application (web service, virtual machines etc.) in the secondary region.
+2. Configure the [virtual network rules](vnet-service-endpoint-rule-overview.md) individually for primary and secondary server.
+3. Enable the [front-end failover using a Traffic manager configuration](designing-cloud-solutions-for-disaster-recovery.md#scenario-1-using-two-azure-regions-for-business-continuity-with-minimal-downtime).
+4. Initiate manual geo-failover when the outage is detected. This option is optimized for the applications that require consistent latency between the front-end and the data tier and supports recovery when either front end, data tier or both are impacted by the outage.
+
+> [!NOTE]
+> If you are using the **read-only listener** to load-balance a read-only workload, make sure that this workload is executed in a VM or other resource in the secondary region so it can connect to the secondary database.
+
+### Use failover groups and firewall rules
+
+If your business continuity plan requires failover using groups with automatic failover, you can restrict access to your database in SQL Database by using public IP firewall rules. To support automatic failover, follow these steps:
+
+1. [Create a public IP](../../virtual-network/ip-services/virtual-network-public-ip-address.md#create-a-public-ip-address).
+2. [Create a public load balancer](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) and assign the public IP to it.
+3. [Create a virtual network and the virtual machines](../../load-balancer/quickstart-load-balancer-standard-public-portal.md) for your front-end components.
+4. [Create network security group](../../virtual-network/network-security-groups-overview.md) and configure inbound connections.
+5. Ensure that the outbound connections are open to Azure SQL Database in a region by using an `Sql.<Region>` [service tag](../../virtual-network/network-security-groups-overview.md#service-tags).
+6. Create a [SQL Database firewall rule](firewall-configure.md) to allow inbound traffic from the public IP address you create in step 1.
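Step 5 might be sketched with Azure CLI as follows. The resource names and region are illustrative; the `Sql.<Region>` service tag covers Azure SQL Database in that region.

```azurecli
# Sketch only; allow outbound traffic from the front-end subnet to Azure SQL Database
# in the secondary region. Ports 11000-11999 are needed for the Redirect connection policy.
az network nsg rule create --resource-group myRG --nsg-name myNSG \
    --name AllowSqlOutboundWestUS2 --priority 200 \
    --direction Outbound --access Allow --protocol Tcp \
    --destination-port-ranges 1433 11000-11999 \
    --destination-address-prefixes Sql.WestUS2
```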
+
+For more information on how to configure outbound access and what IP to use in the firewall rules, see [Load balancer outbound connections](../../load-balancer/load-balancer-outbound-connections.md).
+
+The above configuration will ensure that an automatic geo-failover will not block connections from the front-end components and assumes that the application can tolerate the longer latency between the front end and the data tier.
+
+> [!IMPORTANT]
+> To guarantee business continuity during regional outages you must ensure geographic redundancy for both front-end components and databases.
+
+## <a name="upgrading-or-downgrading-primary-database"></a> Scale primary database
+
+You can scale up or scale down the primary database to a different compute size (within the same service tier) without disconnecting any geo-secondaries. When scaling up, we recommend that you scale up the geo-secondary first, and then scale up the primary. When scaling down, reverse the order: scale down the primary first, and then scale down the secondary. When you scale a database to a different service tier, this recommendation is enforced.
+
+This sequence is recommended specifically to avoid the problem where the geo-secondary at a lower SKU gets overloaded and must be re-seeded during an upgrade or downgrade process. You could also avoid the problem by making the primary read-only, at the expense of impacting all read-write workloads against the primary.
+
+> [!NOTE]
+> If you created a geo-secondary as part of the failover group configuration it is not recommended to scale down the geo-secondary. This is to ensure your data tier has sufficient capacity to process your regular workload after a geo-failover.
+
+## <a name="preventing-the-loss-of-critical-data"></a> Prevent loss of critical data
+
+<!--
+There is some overlap in the following content, be sure to update all that's necessary:
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+-->
+
+Due to the high latency of wide area networks, geo-replication uses an asynchronous replication mechanism. Asynchronous replication makes the possibility of data loss unavoidable if the primary fails. To protect critical transactions from data loss, an application developer can call the [sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync) stored procedure immediately after committing the transaction. Calling `sp_wait_for_database_copy_sync` blocks the calling thread until the last committed transaction has been transmitted and hardened in the transaction log of the secondary database. However, it does not wait for the transmitted transactions to be replayed (redone) on the secondary. `sp_wait_for_database_copy_sync` is scoped to a specific geo-replication link. Any user with the connection rights to the primary database can call this procedure.
+
+> [!NOTE]
+> `sp_wait_for_database_copy_sync` prevents data loss after geo-failover for specific transactions, but does not guarantee full synchronization for read access. The delay caused by an `sp_wait_for_database_copy_sync` procedure call can be significant and depends on the size of the not-yet-transmitted transaction log on the primary at the time of the call.
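+
+As a T-SQL sketch, a critical transaction on the primary might be followed by a synchronization wait like this. The table, server, and database names are hypothetical placeholders; the parameter names follow the linked stored procedure reference:
+
+```sql
+-- Commit the critical transaction on the primary, then block until it has been
+-- transmitted and hardened in the geo-secondary's transaction log.
+-- 'dbo.Orders', 'secondary-server', and 'CriticalDb' are placeholders.
+BEGIN TRANSACTION;
+    UPDATE dbo.Orders SET Status = N'Confirmed' WHERE OrderId = 42;
+COMMIT TRANSACTION;
+
+EXEC sys.sp_wait_for_database_copy_sync
+    @target_server = N'secondary-server',
+    @target_database = N'CriticalDb';
+```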
+
+## Permissions
+
+<!--
+There is some overlap of content in the following articles; be sure to make changes in all of them if necessary:
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/database/auto-failover-group-configure-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+-->
+
+Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+
+Azure RBAC write access is necessary to create and manage failover groups. The [SQL Server Contributor role](../../role-based-access-control/built-in-roles.md#sql-server-contributor) has all the necessary permissions to manage failover groups.
+
+For specific permission scopes, review how to [configure auto-failover groups in Azure SQL Database](auto-failover-group-sql-db.md#permissions).
+
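+As a sketch, assigning the built-in SQL Server Contributor role at the server scope with Azure CLI might look like the following. The assignee, subscription ID, resource group, and server names are hypothetical placeholders:
+
+```azurecli-interactive
+# Grant the built-in SQL Server Contributor role scoped to a logical server.
+# All names and IDs below are hypothetical placeholders.
+az role assignment create \
+    --assignee "user@contoso.com" \
+    --role "SQL Server Contributor" \
+    --scope "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Sql/servers/my-server"
+```
+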
+## Limitations
+
+Be aware of the following limitations:
+
+- Failover groups cannot be created between two servers in the same Azure region.
+- Failover groups cannot be renamed. You will need to delete the group and re-create it with a different name.
+- Database rename is not supported for databases in a failover group. To rename a database, either temporarily delete the failover group, or remove the database from the failover group.
+
+## <a name="programmatically-managing-failover-groups"></a> Programmatically manage failover groups
+
+As discussed previously, auto-failover groups can also be managed programmatically using Azure PowerShell, Azure CLI, and REST API. The following tables describe the set of commands available. Active geo-replication includes a set of Azure Resource Manager APIs for management, including the [Azure SQL Database REST API](/rest/api/sql/) and [Azure PowerShell cmdlets](/powershell/azure/). These APIs require the use of resource groups and support Azure role-based access control (Azure RBAC). For more information on how to implement access roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+
+# [PowerShell](#tab/azure-powershell)
+
+| Cmdlet | Description |
+| --- | --- |
+| [New-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/new-azsqldatabasefailovergroup) |This command creates a failover group and registers it on both primary and secondary servers|
+| [Remove-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/remove-azsqldatabasefailovergroup) | Removes a failover group from the server |
+| [Get-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/get-azsqldatabasefailovergroup) | Retrieves a failover group's configuration |
+| [Set-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/set-azsqldatabasefailovergroup) |Modifies configuration of a failover group |
+| [Switch-AzSqlDatabaseFailoverGroup](/powershell/module/az.sql/switch-azsqldatabasefailovergroup) | Triggers failover of a failover group to the secondary server |
+| [Add-AzSqlDatabaseToFailoverGroup](/powershell/module/az.sql/add-azsqldatabasetofailovergroup)|Adds one or more databases to a failover group|
+
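+For example, a minimal sketch combining two of the cmdlets above to create a failover group and add a database to it. All resource names are hypothetical placeholders:
+
+```powershell-interactive
+# Sketch: create a failover group, then add an existing database to it.
+# Resource group, server, database, and group names are placeholders.
+New-AzSqlDatabaseFailoverGroup -ResourceGroupName "myResourceGroup" `
+    -ServerName "primary-server" -PartnerServerName "secondary-server" `
+    -FailoverGroupName "myfailovergroup" -FailoverPolicy Automatic
+
+Get-AzSqlDatabase -ResourceGroupName "myResourceGroup" -ServerName "primary-server" -DatabaseName "mydb" |
+    Add-AzSqlDatabaseToFailoverGroup -ResourceGroupName "myResourceGroup" `
+        -ServerName "primary-server" -FailoverGroupName "myfailovergroup"
+```
+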
+# [Azure CLI](#tab/azure-cli)
+
+| Command | Description |
+| --- | --- |
+| [az sql failover-group create](/cli/azure/sql/failover-group#az-sql-failover-group-create) |This command creates a failover group and registers it on both primary and secondary servers|
+| [az sql failover-group delete](/cli/azure/sql/failover-group#az-sql-failover-group-delete) | Removes a failover group from the server |
+| [az sql failover-group show](/cli/azure/sql/failover-group#az-sql-failover-group-show) | Retrieves a failover group configuration |
+| [az sql failover-group update](/cli/azure/sql/failover-group#az-sql-failover-group-update) |Modifies a failover group's configuration and/or adds one or more databases to a failover group|
+| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Triggers failover of a failover group to the secondary server |
+
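+The equivalent Azure CLI sketch creates a failover group and includes a database in one command. All names are hypothetical placeholders:
+
+```azurecli-interactive
+# Sketch: create a failover group and include a database in it.
+# Server, group, and database names are hypothetical placeholders.
+az sql failover-group create \
+    --name myfailovergroup \
+    --resource-group myResourceGroup \
+    --server primary-server \
+    --partner-server secondary-server \
+    --add-db mydb \
+    --failover-policy Automatic
+```
+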
+# [REST API](#tab/rest-api)
+
+| API | Description |
+| --- | --- |
+| [Create or Update Failover Group](/rest/api/sql/failovergroups/createorupdate) | Creates or updates a failover group |
+| [Delete Failover Group](/rest/api/sql/failovergroups/delete) | Removes a failover group from the server |
+| [Failover (Planned)](/rest/api/sql/failovergroups/failover) | Triggers failover from the current primary server to the secondary server with full data synchronization.|
+| [Force Failover Allow Data Loss](/rest/api/sql/failovergroups/forcefailoverallowdataloss) | Triggers failover from the current primary server to the secondary server without synchronizing data. This operation may result in data loss. |
+| [Get Failover Group](/rest/api/sql/failovergroups/get) | Retrieves a failover group's configuration. |
+| [List Failover Groups By Server](/rest/api/sql/failovergroups/listbyserver) | Lists the failover groups on a server. |
+| [Update Failover Group](/rest/api/sql/failovergroups/update) | Updates a failover group's configuration. |
+
+---
+## Next steps
+
+- For detailed tutorials, see
+ - [Add SQL Database to a failover group](failover-group-add-single-database-tutorial.md)
+ - [Add an elastic pool to a failover group](failover-group-add-elastic-pool-tutorial.md)
+- For sample scripts, see:
+ - [Use PowerShell to configure active geo-replication for Azure SQL Database](scripts/setup-geodr-and-failover-database-powershell.md)
+ - [Use PowerShell to configure active geo-replication for a pooled database in Azure SQL Database](scripts/setup-geodr-and-failover-elastic-pool-powershell.md)
+ - [Use PowerShell to add an Azure SQL Database to a failover group](scripts/add-database-to-failover-group-powershell.md)
+- For a business continuity overview and scenarios, see [Business continuity overview](business-continuity-high-availability-disaster-recover-hadr-overview.md)
+- To learn about Azure SQL Database automated backups, see [SQL Database automated backups](automated-backups-overview.md).
+- To learn about using automated backups for recovery, see [Restore a database from the service-initiated backups](recovery-using-backups.md).
+- To learn about authentication requirements for a new primary server and database, see [SQL Database security after disaster recovery](active-geo-replication-security-configure.md).
azure-sql Failover Group Add Elastic Pool Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-elastic-pool-tutorial.md
Last updated 01/26/2022
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-Configure a failover group for an Azure SQL Database elastic pool and test failover using the Azure portal. In this tutorial, you'll learn how to:
+> [!div class="op_single_selector"]
+> * [Azure SQL Database (single database)](failover-group-add-single-database-tutorial.md)
+> * [Azure SQL Database (elastic pool)](failover-group-add-elastic-pool-tutorial.md)
+> * [Azure SQL Managed Instance](../managed-instance/failover-group-add-instance-tutorial.md)
+
+Configure an [auto-failover group](auto-failover-group-sql-db.md) for an Azure SQL Database elastic pool and test failover using the Azure portal.
+
+In this tutorial, you'll learn how to:
> [!div class="checklist"]
>
> - Create a single database.
> - Add the database to an elastic pool.
-> - Create a [failover group](auto-failover-group-overview.md) for two elastic pools between two servers.
+> - Create a failover group for two elastic pools between two servers.
> - Test failover.

## Prerequisites
azure-sql Failover Group Add Single Database Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/failover-group-add-single-database-tutorial.md
Title: "Tutorial: Add a database to a failover group"
-description: Add a database in Azure SQL Database to an autofailover group using the Azure portal, PowerShell, or the Azure CLI.
+description: Add a database in Azure SQL Database to an auto-failover group using the Azure portal, PowerShell, or the Azure CLI.
Last updated 01/26/2022
-# Tutorial: Add an Azure SQL Database to an autofailover group
-
+# Tutorial: Add an Azure SQL Database to an auto-failover group
[!INCLUDE[appliesto-sqldb](../includes/appliesto-sqldb.md)]
-A [failover group](auto-failover-group-overview.md) is a declarative abstraction layer that allows you to group multiple geo-replicated databases. Learn to configure a failover group for an Azure SQL Database and test failover using either the Azure portal, PowerShell, or the Azure CLI. In this tutorial, you'll learn how to:
+> [!div class="op_single_selector"]
+> * [Azure SQL Database (single database)](failover-group-add-single-database-tutorial.md)
+> * [Azure SQL Database (elastic pool)](failover-group-add-elastic-pool-tutorial.md)
+> * [Azure SQL Managed Instance](../managed-instance/failover-group-add-instance-tutorial.md)
+
+A [failover group](auto-failover-group-sql-db.md) is a declarative abstraction layer that allows you to group multiple geo-replicated databases. Learn to configure a failover group for an Azure SQL Database and test failover using either the Azure portal, PowerShell, or the Azure CLI. In this tutorial, you'll learn how to:
> [!div class="checklist"]
>
azure-sql High Availability Sla https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/high-availability-sla.md
A failover can be initiated using PowerShell, REST API, or Azure CLI:
|:--|:--|:--|:--|
|Database|[Invoke-AzSqlDatabaseFailover](/powershell/module/az.sql/invoke-azsqldatabasefailover)|[Database failover](/rest/api/sql/databases/failover)|[az rest](/cli/azure/reference-index#az-rest) may be used to invoke a REST API call from Azure CLI|
|Elastic pool|[Invoke-AzSqlElasticPoolFailover](/powershell/module/az.sql/invoke-azsqlelasticpoolfailover)|[Elastic pool failover](/javascript/api/@azure/arm-sql/elasticpools)|[az rest](/cli/azure/reference-index#az-rest) may be used to invoke a REST API call from Azure CLI|
-|Managed Instance|[Invoke-AzSqlInstanceFailover](/powershell/module/az.sql/Invoke-AzSqlInstanceFailover/)|[Managed Instances - Failover](/rest/api/sql/managed%20instances%20-%20failover/failover)|[az sql mi failover](/cli/azure/sql/mi/#az-sql-mi-failover)|
+|Managed Instance|[Invoke-AzSqlInstanceFailover](/powershell/module/az.sql/Invoke-AzSqlInstanceFailover/)|[Managed Instances - Failover](/rest/api/sql/managed%20instances%20-%20failover/failover)|[az sql mi failover](/cli/azure/sql/mi/#az-sql-mi-failover)|
> [!IMPORTANT]
> The Failover command is not available for readable secondary replicas of Hyperscale databases.
azure-sql How To Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/how-to-content-reference-guide.md
In this article you can find a content reference of various guides, scripts, and
- [Configure Conditional Access](conditional-access-configure.md)
- [Multi-factor Azure AD auth](authentication-mfa-ssms-overview.md)
- [Configure Multi-Factor Authentication](authentication-mfa-ssms-configure.md)
+- [Configure backup retention](long-term-backup-retention-configure.md) for a database to keep your backups on Azure Blob Storage.
+- [Configure geo-replication](active-geo-replication-overview.md) to keep a replica of your database in another region.
+- [Configure auto-failover group](auto-failover-group-configure-sql-db.md) to automatically fail over a group of single or pooled databases to a secondary server in another region in the event of a disaster.
- [Configure temporal retention policy](temporal-tables-retention-policy.md) - [Configure TDE with BYOK](transparent-data-encryption-byok-configure.md) - [Rotate TDE BYOK keys](transparent-data-encryption-byok-key-rotation.md)
In this article you can find a content reference of various guides, scripts, and
- [Configure transactional replication](replication-to-sql-database.md) to replicate your data between databases.
- [Configure threat detection](threat-detection-configure.md) to let Azure SQL Database identify suspicious activities such as SQL Injection or access from suspicious locations.
- [Configure dynamic data masking](dynamic-data-masking-configure-portal.md) to protect your sensitive data.
-- [Configure backup retention](long-term-backup-retention-configure.md) for a database to keep your backups on Azure Blob Storage.
-- [Configure geo-replication](active-geo-replication-overview.md) to keep a replica of your database in another region.
- [Configure security for geo-replicas](active-geo-replication-security-configure.md).

## Monitor and tune your database
azure-sql Move Resources Across Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/database/move-resources-across-regions.md
This article provides a general workflow for moving resources to a different reg
1. Create a [failover group](failover-group-add-single-database-tutorial.md#2create-the-failover-group) between the server of the source and the server of the target.
1. Add the databases you want to move to the failover group.
- Replication of all added databases will be initiated automatically. For more information, see [Best practices for using failover groups with single databases](auto-failover-group-overview.md#best-practices-for-sql-database).
+ Replication of all added databases will be initiated automatically. For more information, see [Using failover groups with SQL Database](auto-failover-group-sql-db.md).
### Monitor the preparation process
Once the move completes, remove the resources in the source region to avoid unne
1. Create a separate [failover group](failover-group-add-elastic-pool-tutorial.md#3create-the-failover-group) between each elastic pool on the source server and its counterpart elastic pool on the target server.
1. Add all the databases in the pool to the failover group.
- Replication of the added databases will be initiated automatically. For more information, see [Best practices for failover groups with elastic pools](auto-failover-group-overview.md#best-practices-for-sql-database).
+ Replication of the added databases will be initiated automatically. For more information, see [Using failover groups with SQL Database](auto-failover-group-sql-db.md).
> [!NOTE]
> While it is possible to create a failover group that includes multiple elastic pools, we strongly recommend that you create a separate failover group for each pool. If you have a large number of databases across multiple elastic pools that you need to move, you can run the preparation steps in parallel and then initiate the move step in parallel. This process will scale better and will take less time compared to having multiple elastic pools in the same failover group.
azure-sql Auto Failover Group Configure Sql Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+
+ Title: Configure an auto-failover group
+description: Learn how to configure an auto-failover group for Azure SQL Managed Instance by using the Azure portal, and Azure PowerShell.
+++++
+ms.devlang:
+++ Last updated : 03/01/2022+
+# Configure an auto-failover group for Azure SQL Managed Instance
+
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](../database/auto-failover-group-configure-sql-db.md)
+> * [Azure SQL Managed Instance](auto-failover-group-configure-sql-mi.md)
+
+This article teaches you how to configure an [auto-failover group](auto-failover-group-sql-mi.md) for Azure SQL Managed Instance using the Azure portal and Azure PowerShell. For an end-to-end experience, review the [Auto-failover group tutorial](failover-group-add-instance-tutorial.md).
+
+> [!NOTE]
+> This article covers auto-failover groups for Azure SQL Managed Instance. For Azure SQL Database, see [Configure auto-failover groups in SQL Database](../database/auto-failover-group-configure-sql-db.md).
+
+## Prerequisites
+
+Consider the following prerequisites:
+
+- The secondary managed instance must be empty.
+- The subnet range for the secondary virtual network must not overlap the subnet range of the primary virtual network.
+- The collation and time zone of the secondary managed instance must match that of the primary managed instance.
+- When connecting the two gateways, the **Shared Key** should be the same for both connections.
+- You will need to either configure [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md) or create a gateway for the virtual network of each SQL Managed Instance, connect the two gateways, and then create the failover group.
+- Deploy both managed instances to [paired regions](../../availability-zones/cross-region-replication-azure.md) for performance reasons. Managed instances residing in geo-paired regions have much better performance compared to unpaired regions.
+
+## Create primary virtual network gateway
+
+If you have not configured [ExpressRoute](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md), you can create the primary virtual network gateway with the Azure portal, or PowerShell.
+
+> [!NOTE]
+> The SKU of the gateway affects throughput performance. This article deploys a gateway with the most basic SKU (`VpnGw1`). Deploy a higher SKU (for example, `VpnGw3`) to achieve higher throughput. For all available options, see [Gateway SKUs](../../vpn-gateway/vpn-gateway-about-vpngateways.md#benchmark).
+
+# [Portal](#tab/azure-portal)
+
+Create the primary virtual network gateway using the Azure portal.
+
+1. In the [Azure portal](https://portal.azure.com), go to your resource group and select the **Virtual network** resource for your primary managed instance.
+1. Select **Subnets** under **Settings**, and then select **Gateway subnet** to add a new gateway subnet. Leave the default values.
+
+ ![Add gateway for primary managed instance](./media/auto-failover-group-configure-sql-mi/add-subnet-gateway-primary-vnet.png)
+
+1. Once the subnet gateway is created, select **Create a resource** from the left navigation pane and then type `Virtual network gateway` in the search box. Select the **Virtual network gateway** resource published by **Microsoft**.
+
+ ![Create a new virtual network gateway](./media/auto-failover-group-configure-sql-mi/create-virtual-network-gateway.png)
+
+1. Fill out the required fields to configure the gateway for your primary managed instance.
+
+ The following table shows the values necessary for the gateway for the primary managed instance:
+
+ | **Field** | Value |
+ | | |
+ | **Subscription** | The subscription where your primary managed instance is. |
+ | **Name** | The name for your virtual network gateway. |
+ | **Region** | The region where your primary managed instance is. |
+ | **Gateway type** | Select **VPN**. |
+ | **VPN Type** | Select **Route-based** |
+ | **SKU**| Leave default of `VpnGw1`. |
+ | **Location**| The location where your primary managed instance and primary virtual network are. |
+ | **Virtual network**| Select the virtual network for your primary managed instance. |
+ | **Public IP address**| Select **Create new**. |
+ | **Public IP address name**| Enter a name for your IP address. |
+ | &nbsp; | &nbsp; |
+
+1. Leave the other values as default, and then select **Review + create** to review the settings for your virtual network gateway.
+
+ ![Primary gateway settings](./media/auto-failover-group-configure-sql-mi/settings-for-primary-gateway.png)
+
+1. Select **Create** to create your new virtual network gateway.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create the primary virtual network gateway using PowerShell.
+
+ ```powershell-interactive
+ $primaryResourceGroupName = "<Primary-Resource-Group>"
+ $primaryVnetName = "<Primary-Virtual-Network-Name>"
+ $primaryGWName = "<Primary-Gateway-Name>"
+ $primaryGWPublicIPAddress = $primaryGWName + "-ip"
+ $primaryGWIPConfig = $primaryGWName + "-ipc"
+ $primaryGWAsn = 61000
+
+ # Get the primary virtual network
+ $vnet1 = Get-AzVirtualNetwork -Name $primaryVnetName -ResourceGroupName $primaryResourceGroupName
+ $primaryLocation = $vnet1.Location
+
+ # Create primary gateway
+ Write-host "Creating primary gateway..."
+ $subnet1 = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet1
+ $gwpip1= New-AzPublicIpAddress -Name $primaryGWPublicIPAddress -ResourceGroupName $primaryResourceGroupName `
+ -Location $primaryLocation -AllocationMethod Dynamic
+ $gwipconfig1 = New-AzVirtualNetworkGatewayIpConfig -Name $primaryGWIPConfig `
+ -SubnetId $subnet1.Id -PublicIpAddressId $gwpip1.Id
+
+ $gw1 = New-AzVirtualNetworkGateway -Name $primaryGWName -ResourceGroupName $primaryResourceGroupName `
+ -Location $primaryLocation -IpConfigurations $gwipconfig1 -GatewayType Vpn `
+ -VpnType RouteBased -GatewaySku VpnGw1 -EnableBgp $true -Asn $primaryGWAsn
+ $gw1
+ ```
+
+---
+## Create secondary virtual network gateway
+
+Create the secondary virtual network gateway using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Repeat the steps in the previous section to create the virtual network subnet and gateway for the secondary managed instance. Fill out the required fields to configure the gateway for your secondary managed instance.
+
+The following table shows the values necessary for the gateway for the secondary managed instance:
+
+ | **Field** | Value |
+ | | |
+ | **Subscription** | The subscription where your secondary managed instance is. |
+ | **Name** | The name for your virtual network gateway, such as `secondary-mi-gateway`. |
+ | **Region** | The region where your secondary managed instance is. |
+ | **Gateway type** | Select **VPN**. |
+ | **VPN Type** | Select **Route-based** |
+ | **SKU**| Leave default of `VpnGw1`. |
+ | **Location**| The location where your secondary managed instance and secondary virtual network are. |
+ | **Virtual network**| Select the virtual network that was created in section 2, such as `vnet-sql-mi-secondary`. |
+ | **Public IP address**| Select **Create new**. |
+ | **Public IP address name**| Enter a name for your IP address, such as `secondary-gateway-IP`. |
+ | &nbsp; | &nbsp; |
+
+ ![Secondary gateway settings](./media/auto-failover-group-configure-sql-mi/settings-for-secondary-gateway.png)
+
+# [PowerShell](#tab/azure-powershell)
+
+Create the secondary virtual network gateway using PowerShell.
+
+ ```powershell-interactive
+ $secondaryResourceGroupName = "<Secondary-Resource-Group>"
+ $secondaryVnetName = "<Secondary-Virtual-Network-Name>"
+ $secondaryGWName = "<Secondary-Gateway-Name>"
+ $secondaryGWPublicIPAddress = $secondaryGWName + "-IP"
+ $secondaryGWIPConfig = $secondaryGWName + "-ipc"
+ $secondaryGWAsn = 62000
+
+ # Get the secondary virtual network
+ $vnet2 = Get-AzVirtualNetwork -Name $secondaryVnetName -ResourceGroupName $secondaryResourceGroupName
+ $secondaryLocation = $vnet2.Location
+
+ # Create the secondary gateway
+ Write-host "Creating secondary gateway..."
+ $subnet2 = Get-AzVirtualNetworkSubnetConfig -Name GatewaySubnet -VirtualNetwork $vnet2
+ $gwpip2= New-AzPublicIpAddress -Name $secondaryGWPublicIPAddress -ResourceGroupName $secondaryResourceGroupName `
+ -Location $secondaryLocation -AllocationMethod Dynamic
+ $gwipconfig2 = New-AzVirtualNetworkGatewayIpConfig -Name $secondaryGWIPConfig `
+ -SubnetId $subnet2.Id -PublicIpAddressId $gwpip2.Id
+
+ $gw2 = New-AzVirtualNetworkGateway -Name $secondaryGWName -ResourceGroupName $secondaryResourceGroupName `
+ -Location $secondaryLocation -IpConfigurations $gwipconfig2 -GatewayType Vpn `
+ -VpnType RouteBased -GatewaySku VpnGw1 -EnableBgp $true -Asn $secondaryGWAsn
+
+ $gw2
+ ```
+
+---
+## Connect the gateways
+
+Create connections between the two gateways using the Azure portal or PowerShell.
+
+Two connections need to be created: the connection from the primary gateway to the secondary gateway, and the connection from the secondary gateway to the primary gateway.
+
+Use the same shared key for both connections.
+
+# [Portal](#tab/azure-portal)
+
+Create connections between the two gateways using the Azure portal.
+
+1. Select **Create a resource** from the [Azure portal](https://portal.azure.com).
+1. Type `connection` in the search box and then press enter to search, which takes you to the **Connection** resource, published by Microsoft.
+1. Select **Create** to create your connection.
+1. On the **Basics** tab, select the following values and then select **OK**.
+ 1. Select `VNet-to-VNet` for the **Connection type**.
+ 1. Select your subscription from the drop-down.
+ 1. Select the resource group for your managed instance in the drop-down.
+ 1. Select the location of your primary managed instance from the drop-down.
+1. On the **Settings** tab, select or enter the following values and then select **OK**:
+ 1. Choose the primary network gateway for the **First virtual network gateway**, such as `Primary-Gateway`.
+ 1. Choose the secondary network gateway for the **Second virtual network gateway**, such as `Secondary-Gateway`.
+ 1. Select the checkbox next to **Establish bidirectional connectivity**.
+ 1. Either leave the default primary connection name, or rename it to a value of your choice.
+ 1. Provide a **Shared key (PSK)** for the connection, such as `mi1mi2psk`.
+
+ ![Create gateway connection](./media/auto-failover-group-configure-sql-mi/create-gateway-connection.png)
+
+1. On the **Summary** tab, review the settings for your bidirectional connection and then select **OK** to create your connection.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create connections between the two gateways using PowerShell.
+
+ ```powershell-interactive
+ $vpnSharedKey = "mi1mi2psk"
+ $primaryResourceGroupName = "<Primary-Resource-Group>"
+ $primaryGWConnection = "<Primary-connection-name>"
+ $primaryLocation = "<Primary-Region>"
+ $secondaryResourceGroupName = "<Secondary-Resource-Group>"
+ $secondaryGWConnection = "<Secondary-connection-name>"
+ $secondaryLocation = "<Secondary-Region>"
+
+ # Connect the primary to secondary gateway
+ Write-host "Connecting the primary gateway"
+ New-AzVirtualNetworkGatewayConnection -Name $primaryGWConnection -ResourceGroupName $primaryResourceGroupName `
+ -VirtualNetworkGateway1 $gw1 -VirtualNetworkGateway2 $gw2 -Location $primaryLocation `
+ -ConnectionType Vnet2Vnet -SharedKey $vpnSharedKey -EnableBgp $true
+ $primaryGWConnection
+
+ # Connect the secondary to primary gateway
+ Write-host "Connecting the secondary gateway"
+
+ New-AzVirtualNetworkGatewayConnection -Name $secondaryGWConnection -ResourceGroupName $secondaryResourceGroupName `
+ -VirtualNetworkGateway1 $gw2 -VirtualNetworkGateway2 $gw1 -Location $secondaryLocation `
+ -ConnectionType Vnet2Vnet -SharedKey $vpnSharedKey -EnableBgp $true
+ $secondaryGWConnection
+ ```
+
+---
+## Create the failover group
+
+Create the failover group for your managed instances by using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Create the failover group for your SQL Managed Instances by using the Azure portal.
+
+1. Select **Azure SQL** in the left-hand menu of the [Azure portal](https://portal.azure.com). If **Azure SQL** is not in the list, select **All services**, then type Azure SQL in the search box. (Optional) Select the star next to **Azure SQL** to favorite it and add it as an item in the left-hand navigation.
+1. Select the primary managed instance you want to add to the failover group.
+1. Under **Settings**, navigate to **Instance Failover Groups** and then choose to **Add group** to open the **Instance Failover Group** page.
+
+ ![Add a failover group](./media/auto-failover-group-configure-sql-mi/add-failover-group.png)
+
+1. On the **Instance Failover Group** page, type the name of your failover group and then choose the secondary managed instance from the drop-down. Select **Create** to create your failover group.
+
+ ![Create failover group](./media/auto-failover-group-configure-sql-mi/create-failover-group.png)
+
+1. Once failover group deployment is complete, you will be taken back to the **Failover group** page.
+
+# [PowerShell](#tab/azure-powershell)
+
+Create the failover group for your managed instances using PowerShell.
+
+ ```powershell-interactive
+ $primaryResourceGroupName = "<Primary-Resource-Group>"
+ $failoverGroupName = "<Failover-Group-Name>"
+ $primaryLocation = "<Primary-Region>"
+ $secondaryLocation = "<Secondary-Region>"
+ $primaryManagedInstance = "<Primary-Managed-Instance-Name>"
+ $secondaryManagedInstance = "<Secondary-Managed-Instance-Name>"
+
+ # Create failover group
+ Write-host "Creating the failover group..."
+ $failoverGroup = New-AzSqlDatabaseInstanceFailoverGroup -Name $failoverGroupName `
+ -Location $primaryLocation -ResourceGroupName $primaryResourceGroupName -PrimaryManagedInstanceName $primaryManagedInstance `
+ -PartnerRegion $secondaryLocation -PartnerManagedInstanceName $secondaryManagedInstance `
+ -FailoverPolicy Automatic -GracePeriodWithDataLossHours 1
+ $failoverGroup
+ ```
+
+---
+## Test failover
+
+Test failover of your failover group using the Azure portal or PowerShell.
+
+# [Portal](#tab/azure-portal)
+
+Test failover of your failover group using the Azure portal.
+
+1. Navigate to your _secondary_ managed instance within the [Azure portal](https://portal.azure.com) and select **Instance Failover Groups** under settings.
+1. Review which managed instance is the primary, and which managed instance is the secondary.
+1. Select **Failover** and then select **Yes** on the warning about TDS sessions being disconnected.
+
+ ![Fail over the failover group](./media/auto-failover-group-configure-sql-mi/failover-mi-failover-group.png)
+
+1. Review which managed instance is the primary and which instance is the secondary. If failover succeeded, the two instances should have switched roles.
+
+ ![Managed instances have switched roles after failover](./media/auto-failover-group-configure-sql-mi/mi-switched-after-failover.png)
+
+1. Go to the new _secondary_ managed instance and select **Failover** once again to fail the primary instance back to the primary role.
+
+# [PowerShell](#tab/azure-powershell)
+
+Test failover of your failover group using PowerShell.
+
+ ```powershell-interactive
+ $primaryResourceGroupName = "<Primary-Resource-Group>"
+ $secondaryResourceGroupName = "<Secondary-Resource-Group>"
+ $failoverGroupName = "<Failover-Group-Name>"
+ $primaryLocation = "<Primary-Region>"
+ $secondaryLocation = "<Secondary-Region>"
+ $primaryManagedInstance = "<Primary-Managed-Instance-Name>"
+ $secondaryManagedInstance = "<Secondary-Managed-Instance-Name>"
+
+ # Verify the current primary role
+ Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
+ -Location $secondaryLocation -Name $failoverGroupName
+
+ # Failover the primary managed instance to the secondary role
+ Write-host "Failing primary over to the secondary location"
+ Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $secondaryResourceGroupName `
+ -Location $secondaryLocation -Name $failoverGroupName | Switch-AzSqlDatabaseInstanceFailoverGroup
+ Write-host "Successfully failed failover group to secondary location"
+
+ # Verify the current primary role
+ Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
+ -Location $secondaryLocation -Name $failoverGroupName
+
+ # Fail primary managed instance back to primary role
+ Write-host "Failing primary back to primary role"
+ Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
+ -Location $primaryLocation -Name $failoverGroupName | Switch-AzSqlDatabaseInstanceFailoverGroup
+ Write-host "Successfully failed failover group to primary location"
+
+ # Verify the current primary role
+ Get-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName $primaryResourceGroupName `
+ -Location $secondaryLocation -Name $failoverGroupName
+ ```
+++++
+## Locate listener endpoint
+
+Once your failover group is configured, update the connection string for your application to use the listener endpoint. This keeps your application connected to the failover group listener, rather than to the primary database, elastic pool, or instance database. That way, you don't have to manually update the connection string every time your database entity fails over, and traffic is routed to whichever entity is currently primary.
+
+The listener endpoint is in the form `fog-name.database.windows.net` and is visible in the Azure portal when viewing the failover group:
+
+![Failover group connection string](./media/auto-failover-group-configure-sql-mi/find-failover-group-connection-string.png)
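+As an illustration (the failover group name, database name, and credentials below are placeholders), a client connection string would reference the listener endpoint rather than a specific instance:
+
+```powershell
+# Sketch: build a connection string against the failover group listener.
+# "fog-name", "mydb", and the credentials are placeholders -- substitute
+# your own values. The listener name stays stable across failovers.
+$connectionString = "Server=tcp:fog-name.database.windows.net,1433;" +
+    "Initial Catalog=mydb;User ID=<user>;Password=<password>;" +
+    "Encrypt=True;Connection Timeout=30;"
+```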
+
+## <a name="creating-a-failover-group-between-managed-instances-in-different-subscriptions"></a> Create group between instances in different subscriptions
+
+You can create a failover group between SQL Managed Instances in two different subscriptions, as long as the subscriptions are associated with the same [Azure Active Directory tenant](../../active-directory/fundamentals/active-directory-whatis.md#terminology). When using the PowerShell API, you can do this by specifying the `PartnerSubscriptionId` parameter for the secondary SQL Managed Instance. When using the REST API, each instance ID included in the `properties.managedInstancePairs` parameter can have its own subscription ID.
+
+> [!IMPORTANT]
+> The Azure portal does not support the creation of failover groups across different subscriptions. Also, for existing failover groups across different subscriptions and/or resource groups, failover cannot be initiated manually via the portal from the primary SQL Managed Instance. Initiate it from the geo-secondary instance instead.
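+With PowerShell, a sketch of the cross-subscription creation might look like this (all names below are placeholders; this assumes the `Az.Sql` module version you have installed supports the `PartnerSubscriptionId` parameter):
+
+```powershell
+# Create a failover group whose secondary managed instance lives in a
+# different subscription (all names are placeholders).
+New-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName "<Primary-Resource-Group>" `
+    -Name "<Failover-Group-Name>" -Location "<Primary-Region>" `
+    -PrimaryManagedInstanceName "<Primary-Managed-Instance>" `
+    -PartnerRegion "<Secondary-Region>" `
+    -PartnerResourceGroupName "<Secondary-Resource-Group>" `
+    -PartnerManagedInstanceName "<Secondary-Managed-Instance>" `
+    -PartnerSubscriptionId "<Secondary-Subscription-Id>"
+```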
+
+## Change the secondary region
+
+Let's assume that instance A is the primary instance, instance B is the existing secondary instance, and instance C is the new secondary instance in the third region. To make the transition, follow these steps:
+
+1. Create instance C with the same size as A and in the same DNS zone.
+2. Delete the failover group between instances A and B. At this point, logins will fail because the SQL aliases for the failover group listeners have been deleted and the gateway will not recognize the failover group name. The secondary databases will be disconnected from the primaries and will become read-write databases.
+3. Create a failover group with the same name between instances A and C. Follow the instructions in the [failover group with SQL Managed Instance tutorial](failover-group-add-instance-tutorial.md). This is a size-of-data operation, and it will complete when all databases from instance A are seeded and synchronized.
+4. Delete instance B if it is not needed, to avoid unnecessary charges.
+
+> [!NOTE]
+> After step 2, and until step 3 is completed, the databases in instance A will remain unprotected from a catastrophic failure of instance A.
+
+## Change the primary region
+
+Let's assume instance A is the primary instance, instance B is the existing secondary instance, and instance C is the new primary instance in the third region. To make the transition, follow these steps:
+
+1. Create instance C with the same size as B and in the same DNS zone.
+2. Connect to instance B and manually fail over to switch the primary instance to B. Instance A will become the new secondary instance automatically.
+3. Delete the failover group between instances A and B. At this point, login attempts using the failover group endpoints will fail. The secondary databases on A will be disconnected from the primaries and will become read-write databases.
+4. Create a failover group with the same name between instances B and C. Follow the instructions in the [failover group with managed instance tutorial](failover-group-add-instance-tutorial.md). This is a size-of-data operation, and it will complete when all databases from instance B are seeded and synchronized. At this point, login attempts will stop failing.
+5. Delete instance A if it is not needed, to avoid unnecessary charges.
+
+> [!CAUTION]
+> After step 3, and until step 4 is completed, the databases in instance B will remain unprotected from a catastrophic failure of instance B.
+
+> [!IMPORTANT]
+> When a failover group is deleted, the DNS records for the listener endpoints are also deleted. At that point, there is a non-zero probability of somebody else creating a failover group with the same name. Because failover group names must be globally unique, this would prevent you from using the same name again. To minimize this risk, don't use generic failover group names.
+
+## <a name="enabling-geo-replication-between-managed-instances-and-their-vnets"></a> Enabling geo-replication between MI virtual networks
+
+When you set up a failover group between primary and secondary SQL Managed Instances in two different regions, each instance is isolated using an independent virtual network. To allow replication traffic between these VNets, ensure the following prerequisites are met:
+
+- The two instances of SQL Managed Instance need to be in different Azure regions.
+- The two instances of SQL Managed Instance need to be in the same service tier and have the same storage size.
+- Your secondary instance of SQL Managed Instance must be empty (no user databases).
+- The virtual networks used by the instances of SQL Managed Instance need to be connected through a [VPN Gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md) or [Express Route](../../expressroute/expressroute-howto-circuit-portal-resource-manager.md). When two virtual networks connect through an on-premises network, ensure there are no firewall rules blocking ports 5022 and 11000-11999. Global VNet Peering is supported, with the limitation described in the note below.
+
+ > [!IMPORTANT]
+ > [On 9/22/2020 support for global virtual network peering for newly created virtual clusters was announced](https://azure.microsoft.com/updates/global-virtual-network-peering-support-for-azure-sql-managed-instance-now-available/). It means that global virtual network peering is supported for SQL managed instances created in empty subnets after the announcement date, as well for all the subsequent managed instances created in those subnets. For all the other SQL managed instances peering support is limited to the networks in the same region due to the [constraints of global virtual network peering](../../virtual-network/virtual-network-manage-peering.md#requirements-and-constraints). See also the relevant section of the [Azure Virtual Networks frequently asked questions](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) article for more details. To be able to use global virtual network peering for SQL managed instances from virtual clusters created before the announcement date, consider configuring non-default [maintenance window](../database/maintenance-window.md) on the instances, as it will move the instances into new virtual clusters that support global virtual network peering.
+
+- The two SQL Managed Instance VNets cannot have overlapping IP addresses.
+- You need to set up your network security groups (NSG) such that port 5022 and the range 11000-11999 are open inbound and outbound for connections from the subnet of the other managed instance. This allows replication traffic between the instances.
+
+ > [!IMPORTANT]
+ > Misconfigured NSG security rules lead to stuck database seeding operations.
+
+- The secondary SQL Managed Instance is configured with the correct DNS zone ID. The DNS zone is a property of a SQL Managed Instance and its underlying virtual cluster, and its ID is included in the host name address. The zone ID is generated as a random string when the first SQL Managed Instance is created in each VNet, and the same ID is assigned to all other instances in the same subnet. Once assigned, the DNS zone cannot be modified. SQL Managed Instances included in the same failover group must share the DNS zone. You accomplish this by passing the primary instance's zone ID as the value of the `DnsZonePartner` parameter when creating the secondary instance.
+
+ > [!NOTE]
+ > For a detailed tutorial on configuring failover groups with SQL Managed Instance, see [add a SQL Managed Instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md).
+
+## Permissions
++
+<!--
+There is some overlap of content in the following articles, be sure to make changes to all if necessary:
+/azure-sql/auto-failover-group-overview.md
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/database/auto-failover-group-configure-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+-->
+
+Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+
+Azure RBAC write access is necessary to create and manage failover groups. The [SQL Server Contributor](../../role-based-access-control/built-in-roles.md#sql-server-contributor) role has all the necessary permissions to manage failover groups.
+
+The following table lists specific permission scopes for Azure SQL Managed Instance:
+
+| **Action** | **Permission** | **Scope**|
+| :- | :- | :- |
+|**Create failover group**| Azure RBAC write access | Primary managed instance </br> Secondary managed instance|
+| **Update failover group** | Azure RBAC write access | Failover group </br> All databases within the managed instance|
+| **Fail over failover group** | Azure RBAC write access | Failover group on new primary managed instance |
++
+## Next steps
+
+For detailed steps on configuring a failover group, see the following tutorials:
+
+- [Add a single database to a failover group](../database/failover-group-add-single-database-tutorial.md)
+- [Add an elastic pool to a failover group](../database/failover-group-add-elastic-pool-tutorial.md)
+- [Add a managed instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md)
+
+For an overview of the feature, see [auto-failover groups](auto-failover-group-sql-mi.md).
azure-sql Auto Failover Group Sql Mi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+
+ Title: Auto-failover groups overview & best practices
+description: Auto-failover groups let you manage geo-replication and automatic / coordinated failover of all user databases on a managed instance in Azure SQL Managed Instance.
++++++++ Last updated : 03/01/2022++
+# Auto-failover groups overview & best practices (Azure SQL Managed Instance)
+
+> [!div class="op_single_selector"]
+> * [Azure SQL Database](../database/auto-failover-group-sql-db.md)
+> * [Azure SQL Managed Instance](auto-failover-group-sql-mi.md)
+
+The auto-failover groups feature allows you to manage the replication and failover of all user databases in a managed instance to another Azure region. This article focuses on using the auto-failover group feature with Azure SQL Managed Instance and covers some best practices.
+
+To get started, review [Configure auto-failover group](auto-failover-group-configure-sql-mi.md). For an end-to-end experience, see the [Auto-failover group tutorial](failover-group-add-instance-tutorial.md).
+
+> [!NOTE]
+> This article covers auto-failover groups for Azure SQL Managed Instance. For Azure SQL Database, see [Auto-failover groups in SQL Database](../database/auto-failover-group-sql-db.md).
+
+## Overview
+++
+## <a name="terminology-and-capabilities"></a> Terminology and capabilities
+
+<!--
+There is some overlap of content in the following articles, be sure to make changes to all if necessary:
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/database/auto-failover-group-configure-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+-->
+
+- **Failover group (FOG)**
+
+ A failover group allows for all user databases within a managed instance to fail over as a unit to another Azure region in case the primary managed instance becomes unavailable due to a primary region outage. Since failover groups for SQL Managed Instance contain all user databases within the instance, only one failover group can be configured on an instance.
+
+ > [!IMPORTANT]
+ > The name of the failover group must be globally unique within the `.database.windows.net` domain.
+
+- **Primary**
+
+ The managed instance that hosts the primary databases in the failover group.
+
+- **Secondary**
+
+ The managed instance that hosts the secondary databases in the failover group. The secondary cannot be in the same Azure region as the primary.
+
+- **DNS zone**
+
+ A unique ID that is automatically generated when a new SQL Managed Instance is created. A multi-domain (SAN) certificate for this instance is provisioned to authenticate the client connections to any instance in the same DNS zone. The two managed instances in the same failover group must share the DNS zone.
+
+- **Failover group read-write listener**
+
+ A DNS CNAME record that points to the current primary. It is created automatically when the failover group is created and allows the read-write workload to transparently reconnect to the primary when the primary changes after failover. When the failover group is created on a SQL Managed Instance, the DNS CNAME record for the listener URL is formed as `<fog-name>.<zone_id>.database.windows.net`.
+
+- **Failover group read-only listener**
+
+ A DNS CNAME record that points to the current secondary. It is created automatically when the failover group is created and allows the read-only SQL workload to transparently connect to the secondary when the secondary changes after failover. When the failover group is created on a SQL Managed Instance, the DNS CNAME record for the listener URL is formed as `<fog-name>.secondary.<zone_id>.database.windows.net`.
+++
+## Failover group architecture
+
+The auto-failover group must be configured on the primary instance and will connect it to the secondary instance in a different Azure region. All user databases in the instance will be replicated to the secondary instance. System databases like _master_ and _msdb_ will not be replicated.
+
+The following diagram illustrates a typical configuration of a geo-redundant cloud application using managed instance and auto-failover group:
++
+If your application uses SQL Managed Instance as the data tier, follow the general guidelines and best practices outlined in this article when designing for business continuity.
++
+> [!IMPORTANT]
+> If you deploy auto-failover groups in a cross-region hub-and-spoke network topology, replication traffic should go directly between the two managed instance subnets rather than being directed through the hub networks.
+
+## Initial seeding
+
+When adding managed instances to a failover group, there is an initial seeding phase before data replication starts. The initial seeding phase is the longest and most expensive operation. Once initial seeding completes, data is synchronized, and then only subsequent data changes are replicated. The time it takes for the initial seeding to complete depends on the size of your data, number of replicated databases, the load on primary databases, and the speed of the link between the primary and secondary. Under normal circumstances, possible seeding speed is up to 360 GB an hour for SQL Managed Instance. Seeding is performed for all databases in parallel.
+
+For SQL Managed Instance, consider the speed of the Express Route link between the two instances when estimating the time of the initial seeding phase. If the speed of the link between the two instances is slower than what is necessary, the time to seed is likely to be noticeably impacted. You can use the stated seeding speed, number of databases, total size of data, and the link speed to estimate how long the initial seeding phase will take before data replication starts. For example, for a single 100 GB database, the initial seed phase would take about 1.2 hours if the link is capable of pushing 84 GB per hour, and if there are no other databases being seeded. If the link can only transfer 10 GB per hour, then seeding a 100 GB database will take about 10 hours. If there are multiple databases to replicate, seeding will be executed in parallel, and, when combined with a slow link speed, the initial seeding phase may take considerably longer, especially if the parallel seeding of data from all databases exceeds the available link bandwidth. If the network bandwidth between two instances is limited and you are adding multiple managed instances to a failover group, consider adding multiple managed instances to the failover group sequentially, one by one. Given an appropriately sized gateway SKU between the two managed instances, and if corporate network bandwidth allows it, it's possible to achieve speeds as high as 360 GB an hour.
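+The back-of-the-envelope estimate described above can be sketched as follows (the numbers are illustrative only; actual seeding speed varies with database load and network topology):
+
+```powershell
+# Estimate initial seeding time: total data divided by the effective
+# throughput, which is capped by the stated ~360 GB/hour seeding ceiling.
+$totalDataGB    = 500   # combined size of all databases to seed (placeholder)
+$linkSpeedGBph  = 84    # measured link throughput, GB per hour (placeholder)
+$maxSeedingGBph = 360   # approximate seeding ceiling for SQL Managed Instance
+
+$effectiveGBph  = [Math]::Min($linkSpeedGBph, $maxSeedingGBph)
+$estimatedHours = $totalDataGB / $effectiveGBph
+Write-Host ("Estimated initial seeding time: {0:N1} hours" -f $estimatedHours)
+```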
++
+## <a name="creating-the-secondary-instance"></a> Creating the geo-secondary instance
+
+To ensure uninterrupted connectivity to the primary SQL Managed Instance after failover, both the primary and secondary instances must be in the same DNS zone. This guarantees that the same multi-domain (SAN) certificate can be used to authenticate client connections to either of the two instances in the failover group. When your application is ready for production deployment, create a secondary SQL Managed Instance in a different region and make sure it shares the DNS zone with the primary SQL Managed Instance. You can do this by specifying an optional parameter during creation. If you are using PowerShell or the REST API, the name of the optional parameter is `DnsZonePartner`. The name of the corresponding optional field in the Azure portal is *Primary Managed Instance*.
+
+> [!IMPORTANT]
+> The first managed instance created in the subnet determines DNS zone for all subsequent instances in the same subnet. This means that two instances from the same subnet cannot belong to different DNS zones.
+
+For more information about creating the secondary SQL Managed Instance in the same DNS zone as the primary instance, see [Create a secondary managed instance](../managed-instance/failover-group-add-instance-tutorial.md#create-a-secondary-managed-instance).
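+With PowerShell, a sketch of creating the secondary in the primary's DNS zone might look like this (all names and sizes below are placeholders; it assumes the `New-AzSqlInstance` cmdlet from the `Az.Sql` module):
+
+```powershell
+# Pass the primary instance's resource ID as DnsZonePartner so the new
+# secondary instance is created in the same DNS zone.
+$primary = Get-AzSqlInstance -Name "<Primary-Managed-Instance>" `
+    -ResourceGroupName "<Primary-Resource-Group>"
+
+New-AzSqlInstance -Name "<Secondary-Managed-Instance>" `
+    -ResourceGroupName "<Secondary-Resource-Group>" `
+    -Location "<Secondary-Region>" -SubnetId "<Secondary-Subnet-Id>" `
+    -AdministratorCredential (Get-Credential) `
+    -Edition GeneralPurpose -ComputeGeneration Gen5 `
+    -VCore 8 -StorageSizeInGB 256 `
+    -DnsZonePartner $primary.Id
+```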
+
+## <a name="using-geo-paired-regions"></a> Use paired regions
+
+Deploy both managed instances to [paired regions](../../availability-zones/cross-region-replication-azure.md) for performance reasons. SQL Managed Instance failover groups in paired regions have better performance compared to unpaired regions.
+
+## <a name="enabling-replication-traffic-between-two-instances"></a> Enable geo-replication traffic between two instances
+
+Because each managed instance is isolated in its own VNet, bidirectional traffic between these VNets must be allowed. See [Azure VPN gateway](../../vpn-gateway/vpn-gateway-about-vpngateways.md).
+++
+## <a name="managing-failover-to-secondary-instance"></a> Manage geo-failover to a geo-secondary instance
+
+The failover group will manage geo-failover of all databases on the primary managed instance. When a group is created, each database in the instance will be automatically geo-replicated to the geo-secondary instance. You cannot use failover groups to initiate a partial failover of a subset of databases.
+
+> [!IMPORTANT]
+> If a database is dropped on the primary managed instance, it will also be dropped automatically on the geo-secondary managed instance.
+
+## <a name="using-read-write-listener-for-oltp-workload"></a> Use the read-write listener (primary MI)
+
+For read-write workloads, use `<fog-name>.<zone_id>.database.windows.net` as the server name. Connections will be automatically directed to the primary. This name does not change after failover. The geo-failover involves updating the DNS record, so the client connections are redirected to the new primary only after the client DNS cache is refreshed. Because the secondary instance shares the DNS zone with the primary, the client application will be able to reconnect to it using the same server-side SAN certificate. The read-write listener and read-only listener cannot be reached via the [public endpoint for managed instance](public-endpoint-configure.md).
+
+## <a name="using-read-only-listener-to-connect-to-the-secondary-instance"></a> Use the read-only listener (secondary MI)
+
+If you have logically isolated read-only workloads that are tolerant to data latency, you can run them on the geo-secondary. To connect directly to the geo-secondary, use `<fog-name>.secondary.<zone_id>.database.windows.net` as the server name.
+
+In the Business Critical tier, SQL Managed Instance supports the use of [read-only replicas](../database/read-scale-out.md) to offload read-only query workloads, using the `ApplicationIntent=ReadOnly` parameter in the connection string. When you have configured a geo-replicated secondary, you can use this capability to connect to either a read-only replica in the primary location or in the geo-replicated location:
+
+- To connect to a read-only replica in the primary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.<zone_id>.database.windows.net`.
+- To connect to a read-only replica in the secondary location, use `ApplicationIntent=ReadOnly` and `<fog-name>.secondary.<zone_id>.database.windows.net`.
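+For example (the failover group name, zone ID, and database below are placeholders), the two read-only connection strings differ only in the server name:
+
+```powershell
+# Placeholders: substitute your failover group name, DNS zone ID, and database.
+$fogName = "<fog-name>"; $zoneId = "<zone_id>"; $database = "<database>"
+
+# Read-only replica in the primary region:
+$primaryReadOnly = "Server=tcp:$fogName.$zoneId.database.windows.net,1433;" +
+    "Initial Catalog=$database;ApplicationIntent=ReadOnly;"
+
+# Read-only replica in the geo-secondary region:
+$secondaryReadOnly = "Server=tcp:$fogName.secondary.$zoneId.database.windows.net,1433;" +
+    "Initial Catalog=$database;ApplicationIntent=ReadOnly;"
+```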
+
+The read-write listener and read-only listener cannot be reached via [public endpoint for managed instance](public-endpoint-configure.md).
++
+## Potential performance degradation after failover
+
+A typical Azure application uses multiple Azure services and consists of multiple components. The automatic geo-failover of the failover group is triggered based on the state of the Azure SQL components alone. Other Azure services in the primary region may not be affected by the outage and their components may still be available in that region. Once the primary databases switch to the secondary region, the latency between the dependent components may increase. To avoid the impact of higher latency on the application's performance, ensure the redundancy of all the application's components in the secondary region and fail over application components together with the database.
+
+## Potential data loss after failover
+
+If an outage occurs in the primary region, recent transactions may not be able to replicate to the geo-secondary. Failover is deferred for the period you specify using `GracePeriodWithDataLossHours`. If you configured the automatic failover policy, be prepared for data loss. In general, during outages, Azure favors availability. Setting `GracePeriodWithDataLossHours` to a larger number, such as 24 hours, or disabling automatic geo-failover lets you reduce the likelihood of data loss at the expense of database availability.
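+As a sketch (resource names are placeholders), the grace period is set on the failover group itself with `Set-AzSqlDatabaseInstanceFailoverGroup`:
+
+```powershell
+# Favor data protection over availability by raising the grace period
+# before an automatic geo-failover with data loss can occur.
+Set-AzSqlDatabaseInstanceFailoverGroup -ResourceGroupName "<Resource-Group>" `
+    -Location "<Secondary-Region>" -Name "<Failover-Group-Name>" `
+    -FailoverPolicy Automatic -GracePeriodWithDataLossHours 24
+```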
+
+## DNS update
+
+The DNS update of the read-write listener will happen immediately after the failover is initiated. This operation will not result in data loss. However, the process of switching database roles can take up to 5 minutes under normal conditions. Until it is completed, some databases in the new primary instance will still be read-only. If a failover is initiated using PowerShell, the operation to switch the primary replica role is synchronous. If it is initiated using the Azure portal, the UI will indicate completion status. If it is initiated using the REST API, use standard Azure Resource Manager's polling mechanism to monitor for completion.
+
+> [!IMPORTANT]
+> Use manual planned failover to move the primary back to the original location once the outage that caused the geo-failover is mitigated.
+
+
+## Enable scenarios dependent on objects from the system databases
+
+System databases are **not** replicated to the secondary instance in a failover group. To enable scenarios that depend on objects from the system databases, make sure to create the same objects on the secondary instance and keep them synchronized with the primary instance.
+
+For example, if you plan to use the same logins on the secondary instance, make sure to create them with the identical SID.
+
+```SQL
+-- Code to create login on the secondary instance
+CREATE LOGIN foo WITH PASSWORD = '<enterStrongPasswordHere>', SID = <login_sid>;
+```
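+To find the SID to use, you can query the primary instance first (a sketch; the login name `foo` matches the example above):
+
+```SQL
+-- On the primary instance: look up the login's SID
+SELECT name, sid
+FROM sys.sql_logins
+WHERE name = 'foo';
+-- Use the returned binary value as <login_sid> when creating
+-- the login on the secondary instance.
+```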
+
+To learn more, see [Replication of logins and agent jobs](https://techcommunity.microsoft.com/t5/modernization-best-practices-and/azure-sql-managed-instance-sync-agent-jobs-and-logins-in/ba-p/2860495).
+
+## Synchronize instance properties and retention policies between instances
+
+Instances in a failover group remain separate Azure resources, and no changes made to the configuration of the primary instance will be automatically replicated to the secondary instance. Make sure to perform all relevant changes on both the primary _and_ the secondary instance. For example, if you change the backup storage redundancy or the long-term backup retention policy on the primary instance, make sure to change it on the secondary instance as well.
++
+## <a name="using-failover-groups-and-virtual-network-rules"></a> Use failover groups and virtual network service endpoints
+
+If you are using [Virtual Network service endpoints and rules](../database/vnet-service-endpoint-rule-overview.md) to restrict access to your SQL Managed Instance, be aware that each virtual network service endpoint applies to only one Azure region. The endpoint does not enable other regions to accept communication from the subnet. Therefore, only the client applications deployed in the same region can connect to the primary database.
+
+## <a name="preventing-the-loss-of-critical-data"></a> Prevent loss of critical data
+
+<!--
+There is some overlap in the following content, be sure to update all that's necessary:
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+-->
+
+Due to the high latency of wide area networks, geo-replication uses an asynchronous replication mechanism. Asynchronous replication makes the possibility of data loss unavoidable if the primary fails. To protect critical transactions from data loss, an application developer can call the [sp_wait_for_database_copy_sync](/sql/relational-databases/system-stored-procedures/active-geo-replication-sp-wait-for-database-copy-sync) stored procedure immediately after committing the transaction. Calling `sp_wait_for_database_copy_sync` blocks the calling thread until the last committed transaction has been transmitted and hardened in the transaction log of the secondary database. However, it does not wait for the transmitted transactions to be replayed (redone) on the secondary. `sp_wait_for_database_copy_sync` is scoped to a specific geo-replication link. Any user with the connection rights to the primary database can call this procedure.
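+As a sketch, the procedure is called on the primary database right after the critical transaction commits (the server and database names below are placeholders):
+
+```SQL
+-- Block until the last committed transaction is hardened in the
+-- transaction log of the geo-secondary database.
+EXEC sys.sp_wait_for_database_copy_sync
+    @target_server = N'<secondary-server>',
+    @target_database = N'<database-name>';
+```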
+
+> [!NOTE]
+> `sp_wait_for_database_copy_sync` prevents data loss after geo-failover for specific transactions, but does not guarantee full synchronization for read access. The delay caused by a `sp_wait_for_database_copy_sync` procedure call can be significant and depends on the size of the not yet transmitted transaction log on the primary at the time of the call.
+
+## Permissions
+
+<!--
+There is some overlap of content in the following articles, be sure to make changes to all if necessary:
+/azure-sql/database/auto-failover-group-sql-db.md
+/azure-sql/database/auto-failover-group-configure-sql-db.md
+/azure-sql/managed-instance/auto-failover-group-sql-mi.md
+/azure-sql/managed-instance/auto-failover-group-configure-sql-mi.md
+-->
+
+Permissions for a failover group are managed via [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
+
+Azure RBAC write access is necessary to create and manage failover groups. The [SQL Server Contributor role](../../role-based-access-control/built-in-roles.md#sql-server-contributor) has all the necessary permissions to manage failover groups.
+
+For specific permission scopes, review how to [configure auto-failover groups in Azure SQL Managed Instance](auto-failover-group-configure-sql-mi.md#permissions).
+
+## Limitations
+
+Be aware of the following limitations:
+
+- Failover groups cannot be created between two instances in the same Azure region.
+- Failover groups cannot be renamed. You will need to delete the group and re-create it with a different name.
+- Database rename is not supported for databases in a failover group. You will need to temporarily delete the failover group to be able to rename a database, or remove the database from the failover group.
+- System databases are not replicated to the secondary instance in a failover group. Therefore, scenarios that depend on objects from the system databases require those objects to be manually created on the secondary instance and manually kept in sync after any changes made on the primary instance. The only exception is the service master key (SMK) for SQL Managed Instance, which is replicated automatically to the secondary instance during creation of the failover group. However, any subsequent changes to the SMK on the primary instance will not be replicated to the secondary instance.
+- Failover groups cannot be created between instances if any of them are in an instance pool.
+
+## <a name="programmatically-managing-failover-groups"></a> Programmatically manage failover groups
+
+Auto-failover groups can also be managed programmatically using Azure PowerShell, Azure CLI, and REST API. The following tables describe the set of commands available. Active geo-replication includes a set of Azure Resource Manager APIs for management, including the [Azure SQL Database REST API](/rest/api/sql/) and [Azure PowerShell cmdlets](/powershell/azure/). These APIs require the use of resource groups and support Azure role-based access control (Azure RBAC). For more information on how to implement access roles, see [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md).
++
+# [PowerShell](#tab/azure-powershell)
+
+| Cmdlet | Description |
+| | |
+| [New-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/new-azsqldatabaseinstancefailovergroup) |This command creates a failover group and registers it on both primary and secondary instances|
+| [Set-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/set-azsqldatabaseinstancefailovergroup) |Modifies configuration of a failover group|
+| [Get-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/get-azsqldatabaseinstancefailovergroup) |Retrieves a failover group's configuration|
+| [Switch-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/switch-azsqldatabaseinstancefailovergroup) |Triggers failover of a failover group to the secondary instance|
+| [Remove-AzSqlDatabaseInstanceFailoverGroup](/powershell/module/az.sql/remove-azsqldatabaseinstancefailovergroup) | Removes a failover group|
++
+# [Azure CLI](#tab/azure-cli)
+
+| Command | Description |
+| | |
+| [az sql failover-group create](/cli/azure/sql/failover-group#az-sql-failover-group-create) |This command creates a failover group and registers it on both primary and secondary servers|
+| [az sql failover-group delete](/cli/azure/sql/failover-group#az-sql-failover-group-delete) | Removes a failover group from the server |
+| [az sql failover-group show](/cli/azure/sql/failover-group#az-sql-failover-group-show) | Retrieves a failover group configuration |
+| [az sql failover-group update](/cli/azure/sql/failover-group#az-sql-failover-group-update) |Modifies a failover group's configuration and/or adds one or more databases to a failover group|
+| [az sql failover-group set-primary](/cli/azure/sql/failover-group#az-sql-failover-group-set-primary) | Triggers failover of a failover group to the secondary server |
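For example, a sketch of creating a failover group for a database on an existing server pair, then promoting the secondary. The names below are hypothetical, and `az sql failover-group` targets logical servers rather than managed instances:

```azurecli
# Hypothetical names; assumes both logical servers and the database already exist.
az sql failover-group create \
    --name fog-demo \
    --resource-group rg-demo \
    --server sql-primary \
    --partner-server sql-secondary \
    --add-db mydb

# Trigger a failover by making the secondary server the new primary:
az sql failover-group set-primary \
    --name fog-demo \
    --resource-group rg-demo \
    --server sql-secondary
```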
+
+# [REST API](#tab/rest-api)
+
+| API | Description |
+| --- | --- |
+| [Create or Update Failover Group](/rest/api/sql/instancefailovergroups/createorupdate) | Creates or updates a failover group's configuration |
+| [Delete Failover Group](/rest/api/sql/instancefailovergroups/delete) | Removes a failover group from the instance |
+| [Failover (Planned)](/rest/api/sql/instancefailovergroups/failover) | Triggers failover from the current primary instance to this instance with full data synchronization. |
+| [Force Failover Allow Data Loss](/rest/api/sql/instancefailovergroups/forcefailoverallowdataloss) | Triggers failover from the current primary instance to the secondary instance without synchronizing data. This operation may result in data loss. |
+| [Get Failover Group](/rest/api/sql/instancefailovergroups/get) | Retrieves a failover group's configuration. |
+| [List Failover Groups - List By Location](/rest/api/sql/instancefailovergroups/listbylocation) | Lists the failover groups in a location. |
+++
+## Next steps
+
+- For detailed tutorials, see:
+  - [Add a SQL Managed Instance to a failover group](../managed-instance/failover-group-add-instance-tutorial.md)
+- For a sample script, see:
+  - [Use PowerShell to create an auto-failover group on a SQL Managed Instance](scripts/add-to-failover-group-powershell.md)
+- For a business continuity overview and scenarios, see [Business continuity overview](../database/business-continuity-high-availability-disaster-recover-hadr-overview.md).
+- To learn about automated backups, see [SQL Database automated backups](../database/automated-backups-overview.md).
+- To learn about using automated backups for recovery, see [Restore a database from the service-initiated backups](../database/recovery-using-backups.md).
azure-sql Connectivity Architecture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/connectivity-architecture-overview.md
Let's take a deeper dive into connectivity architecture for SQL Managed Instance
![Connectivity architecture of the virtual cluster](./media/connectivity-architecture-overview/connectivityarch003.png)
-Clients connect to SQL Managed Instance by using a host name that has the form `<mi_name>.<dns_zone>.database.windows.net`. This host name resolves to a private IP address, although it's registered in a public Domain Name System (DNS) zone and is publicly resolvable. The `zone-id` is automatically generated when you create the cluster. If a newly created cluster hosts a secondary managed instance, it shares its zone ID with the primary cluster. For more information, see [Use auto failover groups to enable transparent and coordinated failover of multiple databases](../database/auto-failover-group-overview.md#enabling-geo-replication-between-managed-instances-and-their-vnets).
+Clients connect to SQL Managed Instance by using a host name that has the form `<mi_name>.<dns_zone>.database.windows.net`. This host name resolves to a private IP address, although it's registered in a public Domain Name System (DNS) zone and is publicly resolvable. The `zone-id` is automatically generated when you create the cluster. If a newly created cluster hosts a secondary managed instance, it shares its zone ID with the primary cluster. For more information, see [Use auto failover groups to enable transparent and coordinated failover of multiple databases](auto-failover-group-configure-sql-mi.md#enabling-geo-replication-between-managed-instances-and-their-vnets).
This private IP address belongs to the internal load balancer for SQL Managed Instance. The load balancer directs traffic to the SQL Managed Instance gateway. Because multiple managed instances can run inside the same cluster, the gateway uses the SQL Managed Instance host name to redirect traffic to the correct SQL engine service.
azure-sql Data Virtualization Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/data-virtualization-overview.md
Last updated 03/02/2022
# Data virtualization with Azure SQL Managed Instance (Preview) [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-Azure SQL Managed Instance enables you to execute T-SQL queries that read data from files stored in Azure Data Lake Storage Gen2 or Azure Blob Storage, and to combine it in queries with locally stored relational data via joins. This way you can transparently access external data still allowing it to stay in its original format and location using the concept of data virtualization.
+Data virtualization with Azure SQL Managed Instance allows you to execute Transact-SQL (T-SQL) queries against data from files stored in Azure Data Lake Storage Gen2 or Azure Blob Storage, and combine it with locally stored relational data using joins. This way, you can transparently access external data while keeping it in its original format and location, a concept known as data virtualization.
+
+Data virtualization is currently in preview for Azure SQL Managed Instance.
+ ## Overview
-There are two ways of querying external files, intended for different scenarios:
+Data virtualization provides two ways of querying external files stored in Azure Data Lake Storage or Azure Blob Storage, intended for different scenarios:
- OPENROWSET syntax – optimized for ad-hoc querying of files. Typically used to quickly explore the content and the structure of a new set of files.
-- External tables – optimized for repetitive querying of files using identical syntax as if data were stored locally in the database. It requires few more preparation steps compared to the first option, but it allows more control over data access. It's typically used in analytical workloads and for reporting.
+- External tables – optimized for repetitive querying of files using identical syntax as if data were stored locally in the database. External tables require several preparation steps compared to the OPENROWSET syntax, but allow for more control over data access. External tables are typically used for analytical workloads and reporting.
+
+Parquet and delimited text (CSV) file formats are directly supported. The JSON file format is indirectly supported by specifying the CSV file format where queries return every document as a separate row. It's possible to parse rows further using `JSON_VALUE` and `OPENJSON`.
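As a minimal sketch of the JSON approach (the file name, data source name, and JSON properties below are hypothetical): each document is read as a single `nvarchar(max)` column by pointing the field and quote terminators at a character that doesn't occur in the data, then parsed with `JSON_VALUE`:

```sql
--Hypothetical line-delimited JSON file and existing external data source
SELECT
    JSON_VALUE(jsonrows.doc, '$.date') AS [date],
    JSON_VALUE(jsonrows.doc, '$.cases') AS cases
FROM OPENROWSET(
    BULK 'covid/records.json',
    DATA_SOURCE = 'DemoPublicExternalDataSource',
    FORMAT = 'CSV',
    FIELDTERMINATOR = '0x0b',
    FIELDQUOTE = '0x0b'
) WITH (doc nvarchar(max)) AS jsonrows;
```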
+
+## Getting started
+
+Use Transact-SQL (T-SQL) to explicitly enable the data virtualization feature before using it.
+
+To enable data virtualization capabilities, run the following command:
+
-File formats directly supported are parquet and delimited text (CSV). JSON file format is supported indirectly by specifying CSV file format and queries returning every document as a separate row. Rows can be further parsed using JSON_VALUE and OPENJSON.
+```sql
+exec sp_configure 'polybase_enabled', 1;
+go
+reconfigure;
+go
+```
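Optionally, you can confirm the setting took effect by checking the standard `sys.configurations` catalog view:

```sql
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'polybase_enabled';
```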
-Location of the file(s) to be queried needs to be provided in a specific format, with location prefix corresponding to the type of the external source and endpoint/protocol used:
+Provide the location of the file(s) you intend to query using the location prefix corresponding to the type of external source and endpoint/protocol, such as the following examples:
```sql
--Blob Storage endpoint
abs://<container>@<storage_account>.blob.core.windows.net/<path>/<file_name>.parquet

--Data Lake endpoint
adls://<container>@<storage_account>.dfs.core.windows.net/<path>/<file_name>.parquet
```

> [!IMPORTANT]
-> Usage of the generic https:// prefix is discouraged and will be disabled in the future. Make sure you use endpoint-specific prefixes to avoid interruptions.
+> Using the generic `https://` prefix is discouraged and will be disabled in the future. Be sure to use endpoint-specific prefixes to avoid interruptions.
-The feature needs to be explicitly enabled before using it. Run the following commands to enable the data virtualization capabilities:
-```sql
-exec sp_configure 'polybase_enabled', 1;
-go
-reconfigure;
-go
-```
-If you're new to the data virtualization and want to quickly test functionality, start from querying publicly available data sets available in [Azure Open Datasets](https://docs.microsoft.com/azure/open-datasets/dataset-catalog), like the [Bing COVID-19 dataset](https://docs.microsoft.com/azure/open-datasets/dataset-bing-covid-19?tabs=azure-storage) allowing anonymous access:
+If you're new to data virtualization and want to quickly test functionality, start by querying publicly available data sets available in [Azure Open Datasets](/azure/open-datasets/dataset-catalog), like the [Bing COVID-19 dataset](/azure/open-datasets/dataset-bing-covid-19?tabs=azure-storage) allowing anonymous access.
-- Bing COVID-19 dataset - parquet: abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet
-- Bing COVID-19 dataset - CSV: abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.csv
+Use the following endpoints to query the Bing COVID-19 data sets:
-Once you have first queries executing successfully, you may want to switch to private data sets that require configuring specific access rights or firewall rules.
+- Parquet: `abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.parquet`
+- CSV: `abs://public@pandemicdatalake.blob.core.windows.net/curated/covid-19/bing_covid-19_data/latest/bing_covid-19_data.csv`
+
+Once your public data set queries are executing successfully, consider switching to private data sets that require configuring specific rights and/or firewall rules.
+
+To access a private location, use a Shared Access Signature (SAS) with proper access permissions and validity period to authenticate to the storage account. Create a database-scoped credential using the SAS key, rather than providing it directly in each query. The credential is then used as a parameter to access the external data source.
-To access a private location, you need to authenticate to the storage account using Shared Access Signature (SAS) key with proper access permissions and validity period. The SAS key isn't provided directly in each query. It's used for creation of a database-scoped credential, which is in turn provided as a parameter of an External Data Source.
-All the concepts outlined so far are described in detail in the following sections.
## External data source
-External Data Source is an abstraction intended for easier management of file locations across multiple queries and for referencing authentication parameters encapsulated in database-scoped credential.
+External data sources are abstractions intended to make it easier to manage file locations across multiple queries, and to reference authentication parameters that are encapsulated within database-scoped credentials.
+
+When accessing a public location, add the file location when querying the external data source:
-Public locations are described in an external data source by providing the file location path:
```sql
CREATE EXTERNAL DATA SOURCE DemoPublicExternalDataSource
WITH (
)
```
-Private locations beside path require also reference to a credential to be provided:
+When accessing a private location, include the file path and credential when querying the external data source:
+
```sql
-- Step0 (optional): Create master key if it doesn't exist in the database:
WITH (
```

## Query data sources using OPENROWSET
-[OPENROWSET](https://docs.microsoft.com/sql/t-sql/functions/openrowset-transact-sql) syntax enables instant and ad-hoc querying with minimal required database objects created. DATA_SOURCE parameter value is automatically prepended to the BULK parameter to form full path to the file. Format of the file also needs to be provided:
+
+The [OPENROWSET](/sql/t-sql/functions/openrowset-transact-sql) syntax enables instant ad-hoc querying while only creating the minimal number of database objects necessary.
+`OPENROWSET` only requires creating the external data source (and possibly the credential), as opposed to the external table approach, which requires an external file format and the external table itself.
+
+The `DATA_SOURCE` parameter value is automatically prepended to the BULK parameter to form the full path to the file.
+
+When using `OPENROWSET`, provide the format of the file, such as the following example, which queries a single file:
```sql
SELECT TOP 10 *
FROM OPENROWSET(
```

### Querying multiple files and folders
-While in the previous example OPENROWSET command queried a single file, it can also query multiple files or folders by using wildcards in the BULK path.
-Here's an example using [NYC yellow taxi trip records open data set](https://docs.microsoft.com/azure/open-datasets/dataset-taxi-yellow):
+
+The `OPENROWSET` command also allows querying multiple files or folders by using wildcards in the BULK path.
+
+The following example uses the [NYC yellow taxi trip records open data set](/azure/open-datasets/dataset-taxi-yellow):
```sql
--Query all files with .parquet extension in folders matching name pattern:
FROM OPENROWSET(
FORMAT = 'parquet'
) AS filerows
```
-When you're querying multiple files or folders, all files accessed with the single OPENROWSET must have the same structure, that is, number of columns and their data types. Folders can't be traversed recursively.
+
+When querying multiple files or folders, all files accessed with the single `OPENROWSET` must have the same structure (such as the same number of columns and data types). Folders can't be traversed recursively.
### Schema inference
-The automatic schema inference helps you quickly write queries and explore data without knowing file schemas, as seen in previous sample scripts.
-The cost of the convenience is that inferred data types may be larger than the actual data types, affecting the performance of queries. This happens when there's no enough information in the source files to make sure the appropriate data type is used. For example, parquet files don't contain metadata about maximum character column length, so instance infers it as varchar(8000).
+Automatic schema inference helps you quickly write queries and explore data when you don't know file schemas. Schema inference only works with parquet format files.
-> [!NOTE]
-> Schema inference works only with files in the parquet format.
+While convenient, the cost is that inferred data types may be larger than the actual data types. This can lead to poor query performance since there may not be enough information in the source files to ensure the appropriate data type is used. For example, parquet files don't contain metadata about maximum character column length, so the instance infers it as varchar(8000).
++
+Use the [sp_describe_first_result_set](/sql/relational-databases/system-stored-procedures/sp-describe-first-result-set-transact-sql) stored procedure to check the resulting data types of your query, such as the following example:
-You can use sp_describe_first_results_set stored procedure to check the resulting data types of your query:
```sql
EXEC sp_describe_first_result_set N'
SELECT
EXEC sp_describe_first_result_set N'
) AS nyc';
```
-Once you know the data types, you can specify them using WITH clause to improve the performance:
+Once you know the data types, you can then specify them using the `WITH` clause to improve performance:
+
```sql
SELECT TOP 100 vendor_id, pickup_datetime, passenger_count
passenger_count int
) AS nyc;
```
-For CSV files the schema canΓÇÖt be automatically determined, and you need to explicitly specify columns using WITH clause:
+Since the schema of CSV files can't be automatically determined, explicitly specify columns using the `WITH` clause:
+
```sql
SELECT TOP 10 *
WITH (
```

### File metadata functions
-When querying multiple files or folders, you can use Filepath and Filename functions to read file metadata and get part of the path or full path and name of the file that the row in the result set originates from:
++
+When querying multiple files or folders, you can use `Filepath` and `Filename` functions to read file metadata and get part of the path or full path and name of the file that the row in the result set originates from:
++
```sql
--Query all files and project file path and file name information for each row:
SELECT TOP 10 filerows.filepath(1) as [Year_Folder], filerows.filepath(2) as [Month_Folder],
FROM OPENROWSET(
FORMAT = 'parquet') AS filerows
```
-When called without a parameter, filepath function returns the file path that the row originates from. When DATA_SOURCE is used in OPENROWSET, it returns path relative to DATA_SOURCE, otherwise it returns full file path.
+When called without a parameter, the `Filepath` function returns the file path that the row originates from. When `DATA_SOURCE` is used in `OPENROWSET`, it returns the path relative to the `DATA_SOURCE`, otherwise it returns full file path.
When called with a parameter, it returns part of the path that matches the wildcard on the position specified in the parameter. For example, parameter value 1 would return part of the path that matches the first wildcard.
-Filepath function can also be used for filtering and aggregating rows:
+The `Filepath` function can also be used for filtering and aggregating rows:
+
```sql
SELECT r.filepath() AS filepath
ORDER BY
```

### Creating view on top of OPENROWSET
-You can create and use views to wrap OPENROWSET for easy reusing of underlying query:
+
+You can create and use views to wrap OPENROWSET queries so that you can easily reuse the underlying query:
+
```sql
CREATE VIEW TaxiRides AS
SELECT *
FROM OPENROWSET(
) AS filerows
```
-ItΓÇÖs also convenient to add columns with file location data to a view, using filepath function for easier and more performant filtering. It can reduce the number of files and the amount of data the query on top of the view needs to read and process when filtered by any of those columns:
+It's also convenient to add columns with the file location data to a view, using the `Filepath` function for easier and more performant filtering. Adding these columns can reduce the number of files and the amount of data a query on top of the view needs to read and process when it filters by any of those columns:
++
```sql
CREATE VIEW TaxiRides AS
SELECT *
FROM OPENROWSET(
) AS filerows
```
-Views also enable reporting and analytic tools like Power BI to consume results of OPENROWSET.
+Views also enable reporting and analytic tools like Power BI to consume results of `OPENROWSET`.
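As a sketch, assuming the view exposes file-location columns like the `Year_Folder` and `Month_Folder` values shown in the file metadata section (the exact column names and folder-name values depend on how the view is defined), a consumer can filter so that only matching files are read:

```sql
--Hypothetical column names and folder values; they must match the view definition
SELECT COUNT(*) AS rides
FROM TaxiRides
WHERE Year_Folder = 'year=2017'
  AND Month_Folder = 'month=6';
```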
## External tables
-External tables encapsulate access to the files making the querying experience almost identical to querying local relational data stored in user tables. Creation of an external table requires external data source and external file format objects to exist:
+
+External tables encapsulate access to files making the querying experience almost identical to querying local relational data stored in user tables. Creating an external table requires the external data source and external file format objects to exist:
```sql
--Create external file format
WITH (
GO
```
-Once external table is created, you can query it just like any other table:
+Once the external table is created, you can query it just like any other table:
+
```sql
SELECT TOP 10 *
FROM tbl_TaxiRides
```
-Just like OPENROWSET, external tables allow querying multiple files and folders by using wildcards. Schema inference and filepath/filename functions aren't supported with external tables.
+Just like `OPENROWSET`, external tables allow querying multiple files and folders by using wildcards. Schema inference and filepath/filename functions aren't supported with external tables.
## Performance considerations
-There's no hard limit in terms of number of files or amount of data that can be queried, but query performance will depend on the amount of data, data format, and complexity of queries and joins.
-Collecting statistics on your external data is one of the most important things you can do for query optimization. The more instance knows about your data, the faster it can execute queries.
+There's no hard limit in terms of number of files or amount of data that can be queried, but query performance depends on the amount of data, data format, and complexity of queries and joins.
+
+Collecting statistics on your external data is one of the most important things you can do for query optimization. The more the instance knows about your data, the faster it can execute queries. Automatic creation of statistics isn't supported, but you can and should create statistics manually.
### OPENROWSET statistics
-Single-column statistics for OPENROWSET path can be created using sp_create_openrowset_statistics
-stored procedure, by passing the select query with a single column as a parameter:
+
+Single-column statistics for the `OPENROWSET` path can be created using the `sp_create_openrowset_statistics` stored procedure, by passing the select query with a single column as a parameter:
```sql
EXEC sys.sp_create_openrowset_statistics N'
SELECT pickup_datetime
FROM OPENROWSET(
'
```
-By default instance uses 100% of the data provided in the dataset for creating statistics. You can optionally specify sample size as a percentage using TABLESAMPLE options. To create single-column statistics for multiple columns, you should execute stored procedure for each of the columns. You canΓÇÖt create multi-column statistics for OPENROWSET path.
+By default, the instance uses 100% of the data provided in the dataset to create statistics. You can optionally specify the sample size as a percentage using the `TABLESAMPLE` options. To create single-column statistics for multiple columns, execute the stored procedure for each of the columns. You can't create multi-column statistics for the `OPENROWSET` path.
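As a sketch of the sampling option, `TABLESAMPLE` is appended to the query passed to the stored procedure. The file path and data source name below are hypothetical and should match your own external data source:

```sql
--Hypothetical path and data source; sample 10 percent of the data
EXEC sys.sp_create_openrowset_statistics N'
SELECT pickup_datetime
FROM OPENROWSET(
    BULK ''taxi/year=2017/month=9/*.parquet'',
    DATA_SOURCE = ''NYCTaxiExternalDataSource'',
    FORMAT = ''parquet''
) AS r TABLESAMPLE (10 PERCENT)
'
```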
+
+To update existing statistics, drop them first using the `sp_drop_openrowset_statistics` stored procedure, and then recreate them using the `sp_create_openrowset_statistics`.
+
+To drop existing statistics, use the following example:
-To update existing statistics, drop them first using sp_drop_openrowset_statistics stored procedure, and then recreate them:
```sql
EXEC sys.sp_drop_openrowset_statistics N'
SELECT pickup_datetime
FROM OPENROWSET(
```

### External table statistics
-Syntax for creating stats on external tables resembles the one used for ordinary user tables. To create statistics on a column, provide a name for the statistics object and the name of the column:
+
+The syntax for creating statistics on external tables resembles the one used for ordinary user tables. To create statistics on a column, provide a name for the statistics object and the name of the column:
+
```sql
CREATE STATISTICS sVendor
ON tbl_TaxiRides (vendor_id)
WITH FULLSCAN, NORECOMPUTE
```
-Provided WITH options are mandatory, and for the sample size allowed options are FULLSCAN and SAMPLE n percent.
-To create single-column statistics for multiple columns, execute stored procedure for each of the columns. You canΓÇÖt create multi-column statistics.
+The `WITH` options are mandatory, and for the sample size, the allowed options are `FULLSCAN` and `SAMPLE n PERCENT`. To create single-column statistics for multiple columns, run `CREATE STATISTICS` for each of the columns. Multi-column statistics aren't supported.
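For example, to trade accuracy for speed on a large data set, a sampled statistics object on the `passenger_count` column from the earlier examples might look like this (the statistics object name is arbitrary):

```sql
CREATE STATISTICS sPassengerCount
ON tbl_TaxiRides (passenger_count)
WITH SAMPLE 5 PERCENT, NORECOMPUTE
```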
## Next steps

-- To learn more about syntax options available with OPENROWSET, see [OPENROWSET T-SQL](https://docs.microsoft.com/sql/t-sql/functions/openrowset-transact-sql).
-- For more information about creating external table in SQL Managed Instance, see [CREATE EXTERNAL TABLE](https://docs.microsoft.com/sql/t-sql/statements/create-external-table-transact-sql).
-- To learn more about creating external file format, see [CREATE EXTERNAL FILE FORMAT](https://docs.microsoft.com/sql/t-sql/statements/create-external-file-format-transact-sql)
+- To learn more about syntax options available with OPENROWSET, see [OPENROWSET T-SQL](/sql/t-sql/functions/openrowset-transact-sql).
+- For more information about creating an external table in SQL Managed Instance, see [CREATE EXTERNAL TABLE](/sql/t-sql/statements/create-external-table-transact-sql).
+- To learn more about creating an external file format, see [CREATE EXTERNAL FILE FORMAT](/sql/t-sql/statements/create-external-file-format-transact-sql).
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/doc-changes-updates-release-notes-whats-new.md
ms.devlang: Previously updated : 01/05/2022 Last updated : 03/02/2022 # What's new in Azure SQL Managed Instance? [!INCLUDE[appliesto-sqldb-sqlmi](../includes/appliesto-sqlmi.md)]
The following table lists the features of Azure SQL Managed Instance that are cu
| Feature | Details |
| --- | --- |
| [16 TB support in Business Critical](resource-limits.md#service-tier-characteristics) | Support for allocation up to 16 TB of space on SQL Managed Instance in the Business Critical service tier using the new memory optimized premium-series hardware generation. |
-|[Endpoint policies](../../azure-sql/managed-instance/service-endpoint-policies-configure.md) | Configure which Azure Storage accounts can be accessed from a SQL Managed Instance subnet. Grants an extra layer of protection against inadvertent or malicious data exfiltration.
+| [Data virtualization](data-virtualization-overview.md) | Join locally stored relational data with data queried from external data sources, such as Azure Data Lake Storage Gen2 or Azure Blob Storage. |
+|[Endpoint policies](../../azure-sql/managed-instance/service-endpoint-policies-configure.md) | Configure which Azure Storage accounts can be accessed from a SQL Managed Instance subnet. Grants an extra layer of protection against inadvertent or malicious data exfiltration.|
| [Instance pools](instance-pools-overview.md) | A convenient and cost-efficient way to migrate smaller SQL Server instances to the cloud. |
| [Link feature](link-feature.md)| Online replication of SQL Server databases hosted anywhere to Azure SQL Managed Instance. |
| [Maintenance window](../database/maintenance-window.md)| The maintenance window feature allows you to configure maintenance schedule for your Azure SQL Managed Instance. |
| [Memory optimized premium-series hardware generation](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new memory optimized premium-series hardware generation to take advantage of the latest Intel Ice Lake CPUs. The memory optimized hardware generation offers higher memory to vCore ratios. |
| [Migration with Log Replay Service](log-replay-service-migrate.md) | Migrate databases from SQL Server to SQL Managed Instance by using Log Replay Service. |
| [Premium-series hardware generation](resource-limits.md#service-tier-characteristics) | Deploy your SQL Managed Instance to the new premium-series hardware generation to take advantage of the latest Intel Ice Lake CPUs. |
+| [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-mi-current&preserve-view=true) | Use query hints to optimize your query execution via the OPTION clause. |
| [Service Broker cross-instance message exchange](/sql/database-engine/configure-windows/sql-server-service-broker) | Support for cross-instance message exchange using Service Broker on Azure SQL Managed Instance. |
| [SQL insights](../../azure-monitor/insights/sql-insights-overview.md) | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. |
| [Transactional Replication](replication-transactional-overview.md) | Replicate the changes from your tables into other databases in SQL Managed Instance, SQL Database, or SQL Server. Or update your tables when some rows are changed in other instances of SQL Managed Instance or SQL Server. For information, see [Configure replication in Azure SQL Managed Instance](replication-between-two-instances-configure-tutorial.md). |
| [Threat detection](threat-detection-configure.md) | Threat detection notifies you of security threats detected to your database. |
-| [Query Store hints](/sql/relational-databases/performance/query-store-hints?view=azuresqldb-mi-current&preserve-view=true) | Use query hints to optimize your query execution via the OPTION clause. |
+| [Windows Auth for Azure Active Directory principals](winauth-azuread-overview.md) | Kerberos authentication for Azure Active Directory (Azure AD) enables Windows Authentication access to Azure SQL Managed Instance. |
||| ## General availability (GA)
The following table lists the features of Azure SQL Managed Instance that have t
Learn about significant changes to the Azure SQL Managed Instance documentation.
-### November 2021
+### March 2022
+
+| Changes | Details |
+| --- | --- |
+|**Windows Auth for Azure Active Directory principals preview** | Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience, and provides the basis for infrastructure modernization. Learn more in [Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance](winauth-azuread-overview.md). |
+| **Data virtualization preview** | It's now possible to query data in external sources such as Azure Data Lake Storage Gen2 or Azure Blob Storage, joining it with locally stored relational data. This feature is currently in preview. To learn more, see [Data virtualization](data-virtualization-overview.md). |
+|||
+
+### 2021
| Changes | Details |
| --- | --- |
Learn about significant changes to the Azure SQL Managed Instance documentation.
|**Long-term backup retention GA** | Storing full backups for a specific database with configured redundancy for up to 10 years in Azure Blob storage is now generally available. To learn more, see [Long-term backup retention](long-term-backup-retention-configure.md). |
| **Move instance to different subnet GA** | It's now possible to move your SQL Managed Instance to a different subnet. See [Move instance to different subnet](vnet-subnet-move-instance.md) to learn more. |
|**New hardware generation preview** | There are now two new hardware generations for SQL Managed Instance: premium-series, and a memory optimized premium-series. Both offerings take advantage of a new generation of hardware powered by the latest Intel Ice Lake CPUs, and offer a higher memory to vCore ratio to support your most resource demanding database applications. As part of this announcement, the Gen5 hardware generation has been renamed to standard-series. The two new premium hardware generations are currently in preview. See [resource limits](resource-limits.md#service-tier-characteristics) to learn more. |
-| | |
--
-### October 2021
-
-| Changes | Details |
-| | |
|**Split what's new** | The previously-combined **What's new** article has been split by product - [What's new in SQL Database](../database/doc-changes-updates-release-notes-whats-new.md) and [What's new in SQL Managed Instance](doc-changes-updates-release-notes-whats-new.md), making it easier to identify what features are currently in preview, generally available, and significant documentation changes. Additionally, the [Known Issues in SQL Managed Instance](doc-changes-updates-known-issues.md) content has moved to its own page. |
-| | |
--
-### June 2021
-
-| Changes | Details |
-| | |
|**16 TB support for General Purpose preview** | Support has been added for allocation of up to 16 TB of space for SQL Managed Instance in the General Purpose service tier. See [resource limits](resource-limits.md) to learn more. This instance offer is currently in preview. |
| **Parallel backup** | It's now possible to take backups in parallel for SQL Managed Instance in the general purpose tier, enabling faster backups. See the [Parallel backup for better performance](https://techcommunity.microsoft.com/t5/azure-sql/parallel-backup-for-better-performance-in-sql-managed-instance/ba-p/2421762) blog entry to learn more. |
| **Azure AD-only authentication preview** | It's now possible to restrict authentication to your Azure SQL Managed Instance only to Azure Active Directory users. This feature is currently in preview. To learn more, see [Azure AD-only authentication](../database/authentication-azure-ad-only-authentication.md). |
| **Resource Health monitor** | Use Resource Health to monitor the health status of your Azure SQL Managed Instance. See [Resource health](../database/resource-health-to-troubleshoot-connectivity.md) to learn more. |
| **Granular permissions for data masking GA** | Granular permissions for dynamic data masking for Azure SQL Managed Instance is now generally available (GA). To learn more, see [Dynamic data masking](../database/dynamic-data-masking-overview.md#permissions). |
-| | |
--
-### April 2021
-
-| Changes | Details |
-| | |
| **User-defined routes (UDR) tables** | Service-aided subnet configuration for Azure SQL Managed Instance now makes use of service tags for user-defined routes (UDR) tables. See the [connectivity architecture](connectivity-architecture-overview.md) to learn more. |
-| | |
--
-### March 2021
-
-| Changes | Details |
-| | |
| **Audit management operations** | The ability to audit SQL Managed Instance operations is now generally available (GA). |
| **Log Replay Service** | It's now possible to migrate databases from SQL Server to Azure SQL Managed Instance using the Log Replay Service. To learn more, see [Migrate with Log Replay Service](log-replay-service-migrate.md). This feature is currently in preview. |
| **Long-term backup retention** | Support for long-term backup retention up to 10 years on Azure SQL Managed Instance. To learn more, see [Long-term backup retention](long-term-backup-retention-configure.md). |
| **Maintenance window** | The maintenance window feature allows you to configure a maintenance schedule for your Azure SQL Managed Instance, currently in preview. To learn more, see [maintenance window](../database/maintenance-window.md). |
| **Service Broker message exchange** | The Service Broker component of Azure SQL Managed Instance allows you to compose your applications from independent, self-contained services, by providing native support for reliable and secure message exchange between the databases attached to the service. Currently in preview. To learn more, see [Service Broker](/sql/database-engine/configure-windows/sql-server-service-broker). |
| **SQL insights** | SQL insights is a comprehensive solution for monitoring any product in the Azure SQL family. SQL insights uses dynamic management views to expose the data you need to monitor health, diagnose problems, and tune performance. To learn more, see [SQL insights](../../azure-monitor/insights/sql-insights-overview.md). |
-|||
+|||
### 2020
The following changes were added to SQL Managed Instance and the documentation i
| **Configurable backup storage redundancy** | It's now possible to configure locally redundant storage (LRS) and zone-redundant storage (ZRS) options for backup storage redundancy, providing more flexibility and choice. To learn more, see [Configure backup storage redundancy](../database/automated-backups-overview.md?tabs=managed-instance#configure-backup-storage-redundancy). |
| **TDE-encrypted backup performance improvements** | It's now possible to set the point-in-time restore (PITR) backup retention period. Automated compression of backups encrypted with transparent data encryption (TDE) is now 30 percent more efficient in consuming backup storage space, saving costs for the end user. See [Change PITR](../database/automated-backups-overview.md?tabs=managed-instance#change-the-short-term-retention-policy) to learn more. |
| **Azure AD authentication improvements** | Automate user creation using Azure AD applications and create individual Azure AD guest users (preview). To learn more, see [Directory readers in Azure AD](../database/authentication-aad-directory-readers-role.md). |
-| **Global VNet peering support** | Global virtual network peering support has been added to SQL Managed Instance, improving the geo-replication experience. See [geo-replication between managed instances](../database/auto-failover-group-overview.md?tabs=azure-powershell#enabling-geo-replication-between-managed-instances-and-their-vnets). |
+| **Global VNet peering support** | Global virtual network peering support has been added to SQL Managed Instance, improving the geo-replication experience. See [geo-replication between managed instances](auto-failover-group-configure-sql-mi.md#enabling-geo-replication-between-managed-instances-and-their-vnets). |
| **Hosting SSRS catalog databases** | SQL Managed Instance can now host catalog databases of SQL Server Reporting Services (SSRS) for versions 2017 and newer. |
| **Major performance improvements** | Introducing improvements to SQL Managed Instance performance, including improved transaction log write throughput, improved data and log IOPS for business critical instances, and improved TempDB performance. See the [improved performance](https://techcommunity.microsoft.com/t5/azure-sql/announcing-major-performance-improvements-for-azure-sql-database/ba-p/1701256) tech community blog to learn more. |
| **Enhanced management experience** | Using the new [OPERATIONS API](/rest/api/sql/2021-02-01-preview/managed-instance-operations), it's now possible to check the progress of long-running instance operations. To learn more, see [Management operations](management-operations-overview.md?tabs=azure-portal). |
azure-sql Failover Group Add Instance Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/failover-group-add-instance-tutorial.md
Last updated 08/27/2019
# Tutorial: Add SQL Managed Instance to a failover group [!INCLUDE[appliesto-sqlmi](../includes/appliesto-sqlmi.md)]
-Add managed instances of Azure SQL Managed Instance to a failover group. In this article, you will learn how to:
+> [!div class="op_single_selector"]
+> * [Azure SQL Database (single database)](../database/failover-group-add-single-database-tutorial.md)
+> * [Azure SQL Database (elastic pool)](../database/failover-group-add-elastic-pool-tutorial.md)
+> * [Azure SQL Managed Instance](failover-group-add-instance-tutorial.md)
+
+Add managed instances of Azure SQL Managed Instance to an [auto-failover group](auto-failover-group-sql-mi.md).
+
+In this tutorial, you will learn how to:
> [!div class="checklist"]
> - Create a primary managed instance.
-> - Create a secondary managed instance as part of a [failover group](../database/auto-failover-group-overview.md).
+> - Create a secondary managed instance as part of a failover group.
> - Test failover.

There are multiple ways to establish connectivity between managed instances in different virtual networks, including:
azure-sql How To Content Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/how-to-content-reference-guide.md
In this article you can find a content reference to various guides, scripts, and
- [Configure conditional access](../database/conditional-access-configure.md)
- [Multi-factor Azure AD auth](../database/authentication-mfa-ssms-overview.md)
- [Configure multi-factor auth](../database/authentication-mfa-ssms-configure.md)
+- [Configure auto-failover group](auto-failover-group-configure-sql-mi.md) to automatically failover all databases on an instance to a secondary instance in another region in the event of a disaster.
- [Configure a temporal retention policy](../database/temporal-tables-retention-policy.md)
- [Configure TDE with BYOK](../database/transparent-data-encryption-byok-configure.md)
- [Rotate TDE BYOK keys](../database/transparent-data-encryption-byok-key-rotation.md)
azure-sql Machine Learning Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/machine-learning-services-overview.md
For details on how this command affects SQL Managed Instance resources, see [Res
### Enable Machine Learning Services in a failover group
-In a [failover group](failover-group-add-instance-tutorial.md), system databases are not replicated to the secondary instance (see [Limitations of failover groups](../database/auto-failover-group-overview.md#limitations-of-failover-groups) for more information).
+In a [failover group](failover-group-add-instance-tutorial.md), system databases are not replicated to the secondary instance (see [Limitations of failover groups](auto-failover-group-sql-mi.md#limitations) for more information).
-If the Managed Instance you're using is part of a failover group, do the following:
+If the SQL Managed Instance you're using is part of a failover group, do the following:
- Run the `sp_configure` and `RECONFIGURE` commands on each instance of the failover group to enable Machine Learning Services.
azure-sql Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/resource-limits.md
SQL Managed Instance has two service tiers: [General Purpose](../database/servic
| Max number of database files per instance | Up to 280, unless the instance storage size or [Azure Premium Disk storage allocation space](doc-changes-updates-known-issues.md#exceeding-storage-space-with-small-database-files) limit has been reached. | 32,767 files per database, unless the instance storage size limit has been reached. |
| Max data file size | Maximum size of each data file is 8 TB. Use at least two data files for databases larger than 8 TB. | Up to currently available instance size (depending on the number of vCores). |
| Max log file size | Limited to 2 TB and currently available instance storage size. | Limited to 2 TB and currently available instance storage size. |
-| Data/Log IOPS (approximate) | Up to 30-40 K IOPS per instance*, 500 - 7500 per file<br/>\*[Increase file size to get more IOPS](#file-io-characteristics-in-general-purpose-tier)| 16 K - 320 K (4000 IOPS/vCore)<br/>Add more vCores to get better IO performance. |
+| Data/Log IOPS (approximate) | 500 - 7500 per file<br/>\*[Increase file size to get more IOPS](#file-io-characteristics-in-general-purpose-tier)| 16 K - 320 K (4000 IOPS/vCore)<br/>Add more vCores to get better IO performance. |
| Log write throughput limit (per instance) | 3 MB/s per vCore<br/>Max 120 MB/s per instance<br/>22 - 65 MB/s per DB (depending on log file size)<br/>\*[Increase the file size to get better IO performance](#file-io-characteristics-in-general-purpose-tier) | 4 MB/s per vCore<br/>Max 96 MB/s |
| Data throughput (approximate) | 100 - 250 MB/s per file<br/>\*[Increase the file size to get better IO performance](#file-io-characteristics-in-general-purpose-tier) | Not limited. |
| Storage IO latency (approximate) | 5-10 ms | 1-2 ms |
| In-memory OLTP | Not supported | Available, [size depends on number of vCore](#in-memory-oltp-available-space) |
| Max sessions | 30000 | 30000 |
-| Max concurrent workers | 105 * number of vCores + 800 | 105 * vCore count + 800 |
+| Max concurrent workers | 105 * number of vCores + 800 | 105 * number of vCores + 800 |
| [Read-only replicas](../database/read-scale-out.md) | 0 | 1 (included in price) |
| Compute isolation | Not supported as General Purpose instances may share physical hardware with other instances | **Standard-series (Gen5)**:<br/> Supported for 40, 64, 80 vCores<BR> **Premium-series**: Supported for 64, 80 vCores <BR> **Memory optimized premium-series**: Supported for 64 vCores |
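The worker and log-throughput limits in the table above are simple linear formulas (with a per-instance cap for log throughput). A minimal sketch, with illustrative helper names that are not part of any official API:

```python
def max_concurrent_workers(vcores: int) -> int:
    # Both service tiers use the same formula: 105 workers per vCore plus a fixed 800.
    return 105 * vcores + 800

def log_write_throughput_mbps(vcores: int, tier: str) -> int:
    # Per-vCore rate and per-instance cap differ by tier (values from the table above).
    per_vcore = {"general_purpose": 3, "business_critical": 4}[tier]
    cap = {"general_purpose": 120, "business_critical": 96}[tier]
    return min(per_vcore * vcores, cap)

print(max_concurrent_workers(8))                           # 1640
print(log_write_throughput_mbps(16, "general_purpose"))    # 48
print(log_write_throughput_mbps(32, "business_critical"))  # 96
```

Note that in the General Purpose tier the per-database log rate (22 - 65 MB/s, depending on log file size) can be a tighter bound than the per-instance cap modeled here.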
A few additional considerations:
- **Currently available instance storage size** is the difference between reserved instance size and the used storage space.
- Both data and log file size in the user and system databases are included in the instance storage size that is compared with the max storage size limit. Use the [sys.master_files](/sql/relational-databases/system-catalog-views/sys-master-files-transact-sql) system view to determine the total used space by databases. Error logs are not persisted and not included in the size. Backups are not included in storage size.
- Throughput and IOPS in the General Purpose tier also depend on the [file size](#file-io-characteristics-in-general-purpose-tier) that is not explicitly limited by the SQL Managed Instance.
- You can create another readable replica in a different Azure region using [auto-failover groups](../database/auto-failover-group-configure.md)
+ You can create another readable replica in a different Azure region using [auto-failover groups](auto-failover-group-configure-sql-mi.md)
- Max instance IOPS depend on the file layout and distribution of workload. As an example, if you create 7 x 1 TB files with max 5 K IOPS each and seven small files (smaller than 128 GB) with 500 IOPS each, you can get 38500 IOPS per instance (7x5000+7x500) if your workload can use all files. Note that some IOPS are also used for auto-backups.

Find more information about the [resource limits in SQL Managed Instance pools in this article](instance-pools-overview.md#resource-limitations).
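The per-instance IOPS arithmetic in the example above can be sketched as a small calculation. The per-file IOPS tiers below are an illustrative subset loosely following Azure Premium Disk sizes (P10/P20/P30/P40); the real tier table has more steps:

```python
def file_iops(size_gb: int) -> int:
    # Illustrative per-file IOPS tiers (assumption: simplified tier table).
    if size_gb <= 128:
        return 500
    if size_gb <= 512:
        return 2300
    if size_gb <= 1024:
        return 5000
    return 7500

def instance_iops(file_sizes_gb):
    # Approximate aggregate: the instance only reaches this total if the
    # workload actually touches every file's IOPS budget.
    return sum(file_iops(size) for size in file_sizes_gb)

# The example from the text: 7 x 1 TB files plus 7 files smaller than 128 GB.
print(instance_iops([1024] * 7 + [100] * 7))  # 38500 (7*5000 + 7*500)
```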
azure-sql Winauth Azuread Kerberos Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-kerberos-managed-instance.md
+
+ Title: Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)
+
+description: Learn how to configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory.
+ Last updated : 03/01/2022
+# Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)
+
+This article describes how to configure a managed instance to support [Windows Authentication for Azure AD principals](winauth-azuread-overview.md). The steps to set up Azure SQL Managed Instance are the same for both the [incoming trust-based authentication flow](winauth-azuread-setup-incoming-trust-based-flow.md) and the [modern interactive authentication flow](winauth-azuread-setup-modern-interactive-flow.md).
+
+## Prerequisites
+
+The following prerequisites are required to configure a managed instance for Windows Authentication for Azure AD principals:
+
+|Prerequisite | Description |
+|||
+|Az.Sql PowerShell module | This PowerShell module provides management cmdlets for Azure SQL resources.<BR/><BR/> Install this module by running the following PowerShell command: `Install-Module -Name Az.Sql` |
+|Azure Active Directory PowerShell Module | This module provides management cmdlets for Azure AD administrative tasks such as user and service principal management.<BR/><BR/> Install this module by running the following PowerShell command: `Install-Module -Name AzureAD` |
+| A managed instance | You may [create a new managed instance](../../azure-arc/dat) or use an existing managed instance. |
+
+## Configure Azure AD Authentication for Azure SQL Managed Instance
+
+To enable Windows Authentication for Azure AD Principals, you need to enable a system assigned service principal on each managed instance. The system assigned service principal allows managed instance users to authenticate using the Kerberos protocol. You also need to grant admin consent to each service principal.
+
+### Enable a system assigned service principal
+
+To enable a system assigned service principal for a managed instance:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Navigate to your managed instance.
+1. Select **Identity**.
+1. Set **System assigned service principal** to **On**.
+ :::image type="content" source="media/winauth-azuread/azure-portal-managed-instance-identity-enable-system-assigned-service-principal.png" alt-text="Screenshot of the identity pane for a managed instance in the Azure portal. Under 'System assigned service principal' the radio button next to the 'Status' label has been set to 'On'." lightbox="media/winauth-azuread/azure-portal-managed-instance-identity-enable-system-assigned-service-principal.png":::
+1. Select **Save**.
+
+### Grant admin consent to a system assigned service principal
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Open Azure Active Directory.
+1. Select **App registrations**.
+1. Select **All applications**.
+ :::image type="content" source="media/winauth-azuread/azure-portal-azuread-app-registrations.png" alt-text="Screenshot of the Azure portal. Azure Active Directory is open. App registrations is selected in the left pane. All applications is highlighted in the right pane." lightbox="media/winauth-azuread/azure-portal-azuread-app-registrations.png":::
+1. Select the application with the display name matching your managed instance. The name will be in the format: `<managedinstancename> principal`.
+1. Select **API permissions**.
+1. Select **Grant admin consent**.
+
+ :::image type="content" source="media/winauth-azuread/azure-portal-configure-permissions-admin-consent.png" alt-text="Screenshot from the Azure portal of the configured permissions for applications. The status for the example application is 'Granted for aadsqlmi'." lightbox="media/winauth-azuread/azure-portal-configure-permissions-admin-consent.png":::
+1. Select **Yes** on the prompt to **Grant admin consent confirmation**.
+
+## Connect to the managed instance with Windows Authentication
+
+If you have already implemented either the incoming [trust-based authentication flow](winauth-azuread-setup-incoming-trust-based-flow.md) or the [modern interactive authentication flow](winauth-azuread-setup-modern-interactive-flow.md), depending on the version of your client, you can now test connecting to your managed instance with Windows Authentication.
+
+To test the connection with [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) (SSMS), follow the steps in [Quickstart: Use SSMS to connect to and query Azure SQL Database or Azure SQL Managed Instance](../database/connect-query-ssms.md). Select **Windows Authentication** as your authentication type.
++
+## Next steps
+
+Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed Instance:
+
+- [Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance](winauth-azuread-troubleshoot.md)
+- [What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)](winauth-azuread-overview.md)
+- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
azure-sql Winauth Azuread Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-overview.md
+
+ Title: What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)
+
+description: Learn about Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance.
+ Last updated : 03/01/2022
+# What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)
+
+[Azure SQL Managed Instance](sql-managed-instance-paas-overview.md) is the intelligent, scalable cloud database service that combines the broadest SQL Server database engine compatibility with the benefits of a fully managed and evergreen platform as a service. Kerberos authentication for Azure Active Directory (Azure AD) enables Windows Authentication access to Azure SQL Managed Instance. Windows Authentication for managed instances empowers customers to move existing services to the cloud while maintaining a seamless user experience and provides the basis for infrastructure modernization.
+
+## Key capabilities and scenarios
+
+As customers modernize their infrastructure, application, and data tiers, they also modernize their identity management capabilities by shifting to Azure AD. Azure SQL offers multiple [Azure AD Authentication](/azure/azure-sql/database/authentication-aad-overview.md) options:
+
+- 'Azure Active Directory - Password' offers authentication with Azure AD credentials
+- 'Azure Active Directory - Universal with MFA' adds multi-factor authentication
+- 'Azure Active Directory - Integrated' uses federation providers like [Active Directory Federation Services](/windows-server/identity/active-directory-federation-services) (ADFS) to enable Single Sign-On experiences
+
+However, some legacy apps can't change their authentication to Azure AD: legacy application code may no longer be available, there may be a dependency on legacy drivers, clients may not be able to be changed, and so on. Windows Authentication for Azure AD principals removes this migration blocker and provides support for a broader range of customer applications.
+
+Windows Authentication for Azure AD principals on managed instances is available for devices or virtual machines (VMs) joined to Active Directory (AD), Azure AD, or hybrid Azure AD. An Azure AD hybrid user whose user identity exists both in Azure AD and AD can access a managed instance in Azure using Azure AD Kerberos.
+
+Enabling Windows Authentication for a managed instance doesn't require customers to deploy new on-premises infrastructure or manage the overhead of setting up Domain Services.
+
+Windows Authentication for Azure AD principals on Azure SQL Managed Instance enables two key scenarios: migrating on-premises SQL Servers to Azure with minimal changes and modernizing security infrastructure.
+
+### Lift and shift on-premises SQL Servers to Azure with minimal changes
+
+By enabling Windows Authentication for Azure Active Directory principals, customers can migrate to Azure SQL Managed Instance without implementing changes to application authentication stacks or deploying Azure AD Domain Services. Customers can also use Windows Authentication to access a managed instance from their AD or Azure AD joined devices.
+
+Windows Authentication for Azure Active Directory principals also enables the following patterns on managed instances. These patterns are frequently used in traditional on-premises SQL Servers:
++
+- **"Double hop" authentication**: Web applications use IIS identity impersonation to run queries against an instance in the security context of the end user.
+- **Traces using extended events and SQL Server Profiler** can be launched using Windows authentication, providing ease of use for database administrators and developers accustomed to this workflow. Learn how to [run a trace against Azure SQL Managed Instance using Windows Authentication for Azure Active Directory principals](winauth-azuread-run-trace-managed-instance.md).
+
+### Modernize security infrastructure
+
+Enabling Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance equips customers to modernize their security practices.
+
+For example, a customer can enable a mobile analyst, using proven tools that rely on Windows Authentication, to authenticate to a managed instance using biometric credentials. This can be accomplished even if the mobile analyst works from a laptop that is joined to Azure AD.
+
+## Next steps
+
+Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed Instance:
+
+- [How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)](winauth-implementation-aad-kerberos.md)
+- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
azure-sql Winauth Azuread Run Trace Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-run-trace-managed-instance.md
+
+ Title: Run a trace against Azure SQL Managed Instance using Windows Authentication for Azure Active Directory principals (preview)
+description: Learn how to run a trace against Azure SQL Managed Instance using Windows Authentication for Azure Active Directory principals.
+ Last updated : 03/01/2022
+# Run a trace against Azure SQL Managed Instance using Windows Authentication for Azure Active Directory principals (preview)
+
+This article shows how to connect and run a trace against Azure SQL Managed Instance using Windows Authentication for Azure Active Directory (Azure AD) principals. Windows authentication provides a convenient way for customers to connect to a managed instance, especially for database administrators and developers who are accustomed to launching [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) (SSMS) with their Windows credentials.
+
+This article shares two options to run a trace against a managed instance: you can trace with [extended events](/sql/relational-databases/extended-events/extended-events) or with [SQL Server Profiler](/sql/tools/sql-server-profiler/sql-server-profiler). While SQL Server Profiler may still be used, the trace functionality used by SQL Server Profiler is deprecated and will be removed in a future version of Microsoft SQL Server.
+
+## Prerequisites
+
+To use Windows Authentication to connect to and run a trace against a managed instance, you must first meet the following prerequisites:
+
+- [Set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md).
+- Install [SQL Server Management Studio](/sql/ssms/download-sql-server-management-studio-ssms) (SSMS) on the client that is connecting to the managed instance. The SSMS installation includes SQL Server Profiler and built-in components to create and run extended events traces.
+- Enable tooling on your client machine to connect to the managed instance. This may be done by any of the following:
+ - [Configure an Azure VM to connect to Azure SQL Managed Instance](connect-vm-instance-configure.md).
+ - [Configure a point-to-site connection to Azure SQL Managed Instance from on-premises](point-to-site-p2s-configure.md).
+ - [Configure a public endpoint in Azure SQL Managed Instance](public-endpoint-configure.md).
+- To create or modify extended events sessions, ensure that your account has the [server permission](/sql/t-sql/statements/grant-server-permissions-transact-sql) of ALTER ANY EVENT SESSION on the managed instance.
+- To create or modify traces in SQL Server Profiler, ensure that your account has the [server permission](/sql/t-sql/statements/grant-server-permissions-transact-sql) of ALTER TRACE on the managed instance.
+
+If you have not yet enabled Windows authentication for Azure AD principals against your managed instance, you may run a trace against a managed instance using an [Azure AD Authentication](/azure/azure-sql/database/authentication-aad-overview.md) option, including:
+
+- 'Azure Active Directory - Password'
+- 'Azure Active Directory - Universal with MFA'
+- 'Azure Active Directory - Integrated'
+
+## Run a trace with extended events
+
+To run a trace with extended events against a managed instance using Windows Authentication, you will first connect Object Explorer to your managed instance using Windows Authentication.
+
+1. Launch SQL Server Management Studio from a client machine where you have logged in using Windows Authentication.
+1. The 'Connect to Server' dialog box should automatically appear. If it does not, ensure that **Object Explorer** is open and select **Connect**.
+1. Enter the name of your managed instance as the **Server name**. The name of your managed instance should be in a format similar to `managedinstancename.12a34b5c67ce.database.windows.net`.
+1. For **Authentication**, select **Windows Authentication**.
+
+ :::image type="content" source="media/winauth-azuread/winauth-connect-to-managed-instance.png" alt-text="Dialog box from SQL Server Management Studio with a managed instance name in the 'Server Name' area and 'Authentication' set to 'Windows Authentication'.":::
+
+1. Select **Connect**.
+
+Now that **Object Explorer** is connected, you can create and run an extended events trace. Follow the steps in [Quick Start: Extended events in SQL Server](/sql/relational-databases/extended-events/quick-start-extended-events-in-sql-server) to learn how to create, test, and display the results of an extended events session.
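The server name used in the connection steps above follows a recognizable pattern: an instance name, a 12-character hex identifier, then `database.windows.net`. A hypothetical sanity-check helper (an assumption for illustration, not part of SSMS or any Azure SDK) could look like this:

```python
import re

# Pattern assumed from the example host name in the text:
# <instance-name>.<12 hex chars>.database.windows.net
MI_HOST = re.compile(r"[a-z0-9][a-z0-9-]*\.[0-9a-f]{12}\.database\.windows\.net")

def looks_like_managed_instance(host: str) -> bool:
    """Return True if the host name matches the assumed managed instance format."""
    return MI_HOST.fullmatch(host) is not None

print(looks_like_managed_instance("managedinstancename.12a34b5c67ce.database.windows.net"))  # True
print(looks_like_managed_instance("myserver.database.windows.net"))                          # False
```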
+
+## Run a trace with Profiler
+
+To run a trace with SQL Server Profiler against a managed instance using Windows Authentication, launch the Profiler application. Profiler may be [run from the Windows Start menu or from SQL Server Management Studio](/sql/tools/sql-server-profiler/start-sql-server-profiler).
+
+1. On the File menu, select **New Trace**.
+1. Enter the name of your managed instance as the **Server name**. The name of your managed instance should be in a format similar to `managedinstancename.12a34b5c67ce.database.windows.net`.
+1. For **Authentication**, select **Windows Authentication**.
+
+ :::image type="content" source="media/winauth-azuread/winauth-connect-to-managed-instance.png" alt-text="Dialog box from SQL Server Management Studio with a managed instance name in the 'Server Name' area and 'Authentication' set to 'Windows Authentication'.":::
+
+1. Select **Connect**.
+1. Follow the steps in [Create a Trace (SQL Server Profiler)](/sql/tools/sql-server-profiler/create-a-trace-sql-server-profiler) to configure the trace.
+1. Select **Run** after configuring the trace.
+
+## Next steps
+
+Learn more about Windows Authentication for Azure AD principals with Azure SQL Managed Instance:
+
+- [What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)](winauth-azuread-overview.md)
+- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
+- [How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)](winauth-implementation-aad-kerberos.md)
+- [Extended Events](/sql/relational-databases/extended-events/extended-events)
azure-sql Winauth Azuread Setup Incoming Trust Based Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-incoming-trust-based-flow.md
+
+ Title: How to set up Windows Authentication for Azure Active Directory with the incoming trust-based flow (Preview)
+
+description: Learn how to set up Windows authentication for Azure Active Directory with the incoming trust-based flow.
+ Last updated : 03/01/2022
+# How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)
+
+This article describes how to implement the incoming trust-based authentication flow to allow Active Directory (AD) joined clients running Windows 10, Windows Server 2012, or higher versions of Windows to authenticate to an Azure SQL Managed Instance using Windows Authentication. This article also shares steps to rotate a Kerberos Key for your Azure Active Directory (Azure AD) service account and Trusted Domain Object, and steps to remove a Trusted Domain Object and all Kerberos settings, if desired.
+
+Enabling the incoming trust-based authentication flow is one step in [setting up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md). The [modern interactive flow (Preview)](winauth-azuread-setup-modern-interactive-flow.md) is available for enlightened clients running Windows 10 20H1, Windows Server 2022, or a higher version of Windows.
+
+## Permissions
+
+To complete the steps outlined in this article, you will need:
+
+- An on-premises Active Directory administrator username and password.
+- Azure AD global administrator account username and password.
+
+## Prerequisites
+
+To implement the incoming trust-based authentication flow, first ensure that the following prerequisites have been met:
+
+|Prerequisite |Description |
+|||
+|Client must run Windows 10, Windows Server 2012, or a higher version of Windows. | |
+|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Azure AD Hybrid Authentication Management Module. | This PowerShell module provides management features for on-premises setup. |
+|Azure tenant. | |
+|Azure subscription under the same Azure AD tenant you plan to use for authentication.| |
+|Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
+| | |
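+
+You can sketch the AD join check from a command prompt; in the `dsregcmd` output, the device state section indicates join status (the field name below reflects recent Windows builds and is shown for illustration):
+
+```dos
+dsregcmd.exe /status
+
+:: In the "Device State" section of the output, an AD joined client reports:
+::   DomainJoined : YES
+```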
+
+## Create and configure the Azure AD Kerberos Trusted Domain Object
+
+To create and configure the Azure AD Kerberos Trusted Domain Object, you will install the Azure AD Hybrid Authentication Management PowerShell module.
+
+You will then use the Azure AD Hybrid Authentication Management PowerShell module to set up a Trusted Domain Object in the on-premises AD domain and register trust information with Azure AD. This creates an in-bound trust relationship into the on-premises AD, which enables on-premises AD to trust Azure AD.
+
+### Set up the Trusted Domain Object
+
+To set up the Trusted Domain Object, first install the Azure AD Hybrid Authentication Management PowerShell module.
+
+#### Install the Azure AD Hybrid Authentication Management PowerShell module
+
+1. Start a Windows PowerShell session with the **Run as administrator** option.
+
+1. Install the Azure AD Hybrid Authentication Management PowerShell module using the following script. The script:
+
+ - Enables TLS 1.2 for communication.
+ - Installs the NuGet package provider.
+ - Registers the PSGallery repository.
+ - Installs the PowerShellGet module.
+ - Installs the Azure AD Hybrid Authentication Management PowerShell module.
+     - The Azure AD Hybrid Authentication Management PowerShell module uses the AzureADPreview module, which provides advanced Azure AD management features.
+     - To protect against unnecessary installation conflicts with the AzureAD PowerShell module, this command includes the `-AllowClobber` option flag.
+
+```powershell
+[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
+
+Install-PackageProvider -Name NuGet -Force
+
+if (@(Get-PSRepository | ? {$_.Name -eq "PSGallery"}).Count -eq 0){
+    Register-PSRepository -Default
+    Set-PSRepository -Name "PSGallery" -InstallationPolicy Trusted
+}
+
+Install-Module -Name PowerShellGet -Force
+
+Install-Module -Name AzureADHybridAuthenticationManagement -AllowClobber
+```
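+
+After the script completes, you can optionally confirm that the module is available before continuing (a quick sanity check, not part of the original steps):
+
+```powershell
+# Confirm the Azure AD Hybrid Authentication Management module installed successfully
+Get-Module -ListAvailable -Name AzureADHybridAuthenticationManagement
+```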
+
+#### Create the Trusted Domain Object
+
+1. Start a Windows PowerShell session with the **Run as administrator** option.
+
+1. Set the common parameters. Customize the script below prior to running it.
+
+ - Set the `$domain` parameter to your on-premises Active Directory domain name.
+ - When prompted by `Get-Credential`, enter an on-premises Active Directory administrator username and password.
+ - Set the `$cloudUserName` parameter to the username of a Global Administrator privileged account for Azure AD cloud access.
+
+ > [!NOTE]
+ > If you wish to use your current Windows login account for your on-premises Active Directory access, you can skip the step where credentials are assigned to the `$domainCred` parameter. If you take this approach, do not include the `-DomainCredential` parameter in the PowerShell commands following this step.
++
+ ```powershell
+    $domain = "your on-premises domain name, for example contoso.com"
+
+ $domainCred = Get-Credential
+
+ $cloudUserName = "Azure AD user principal name, for example admin@contoso.onmicrosoft.com"
+ ```
+
+1. Check the current Kerberos Domain Settings.
+
+ Run the following command to check your domain's current Kerberos settings:
+
+ ```powershell
+ Get-AzureAdKerberosServer -Domain $domain `
+ -DomainCredential $domainCred `
+ -UserPrincipalName $cloudUserName
+ ```
+
+ If this is the first time calling any Azure AD Kerberos command, you will be prompted for Azure AD cloud access.
+ - Enter the password for your Azure AD global administrator account.
+ - If your organization uses other modern authentication methods such as MFA (Azure Multi-Factor Authentication) or Smart Card, follow the instructions as requested for sign in.
+
+ If this is the first time you're configuring Azure AD Kerberos settings, the [Get-AzureAdKerberosServer cmdlet](/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises#view-and-verify-the-azure-ad-kerberos-server) will display empty information, as in the following sample output:
+
+ ```
+ ID :
+ UserAccount :
+ ComputerAccount :
+ DisplayName :
+ DomainDnsName :
+ KeyVersion :
+ KeyUpdatedOn :
+ KeyUpdatedFrom :
+ CloudDisplayName :
+ CloudDomainDnsName :
+ CloudId :
+ CloudKeyVersion :
+ CloudKeyUpdatedOn :
+ CloudTrustDisplay :
+ ```
+
+ If your domain already supports FIDO authentication, the `Get-AzureAdKerberosServer` cmdlet will display Azure AD Service account information, as in the following sample output. Note that the `CloudTrustDisplay` field returns an empty value.
+
+ ```
+ ID : 25614
+ UserAccount : CN=krbtgt-AzureAD, CN=Users, DC=aadsqlmi, DC=net
+ ComputerAccount : CN=AzureADKerberos, OU=Domain Controllers, DC=aadsqlmi, DC=net
+ DisplayName : krbtgt_25614
+ DomainDnsName : aadsqlmi.net
+ KeyVersion : 53325
+ KeyUpdatedOn : 2/24/2022 9:03:15 AM
+ KeyUpdatedFrom : ds-aad-auth-dem.aadsqlmi.net
+ CloudDisplayName : krbtgt_25614
+ CloudDomainDnsName : aadsqlmi.net
+ CloudId : 25614
+ CloudKeyVersion : 53325
+ CloudKeyUpdatedOn : 2/24/2022 9:03:15 AM
+ CloudTrustDisplay :
+ ```
+
+1. Add the Trusted Domain Object.
+
+    Run the [Set-AzureAdKerberosServer PowerShell cmdlet](/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises#create-a-kerberos-server-object) to add the Trusted Domain Object. Be sure to include the `-SetupCloudTrust` parameter. If there is no Azure AD service account, this command creates a new Azure AD service account. If an Azure AD service account already exists, the command only creates the requested Trusted Domain Object.
+
+ ```powershell
+ Set-AzureAdKerberosServer -Domain $domain `
+ -DomainCredential $domainCred `
+ -UserPrincipalName $cloudUserName `
+ -SetupCloudTrust
+ ```
+
+ After creating the Trusted Domain Object, you can check the updated Kerberos Settings using the `Get-AzureAdKerberosServer` PowerShell cmdlet, as shown in the previous step. If the `Set-AzureAdKerberosServer` cmdlet has been run successfully with the `-SetupCloudTrust` parameter, the `CloudTrustDisplay` field should now return `Microsoft.AzureAD.Kdc.Service.TrustDisplay`, as in the following sample output:
+
+ ```
+ ID : 25614
+ UserAccount : CN=krbtgt-AzureAD, CN=Users, DC=aadsqlmi, DC=net
+ ComputerAccount : CN=AzureADKerberos, OU=Domain Controllers, DC=aadsqlmi, DC=net
+ DisplayName : krbtgt_25614
+ DomainDnsName : aadsqlmi.net
+ KeyVersion : 53325
+ KeyUpdatedOn : 2/24/2022 9:03:15 AM
+ KeyUpdatedFrom : ds-aad-auth-dem.aadsqlmi.net
+ CloudDisplayName : krbtgt_25614
+ CloudDomainDnsName : aadsqlmi.net
+ CloudId : 25614
+ CloudKeyVersion : 53325
+ CloudKeyUpdatedOn : 2/24/2022 9:03:15 AM
+ CloudTrustDisplay : Microsoft.AzureAD.Kdc.Service.TrustDisplay
+ ```
+
+## Configure the Group Policy Object (GPO)
+
+1. Identify your [Azure AD tenant ID](/azure/active-directory/fundamentals/active-directory-how-to-find-tenant.md).
+
+1. Deploy the following Group Policy setting to client machines using the incoming trust-based flow:
+
+ 1. Edit the **Administrative Templates\System\Kerberos\Specify KDC proxy servers for Kerberos clients** policy setting.
+ 1. Select **Enabled**.
+ 1. Under **Options**, select **Show...**. This opens the Show Contents dialog box.
+
+ :::image type="content" source="media/winauth-azuread/configure-policy-kdc-proxy.png" alt-text="Screenshot of dialog box to enable 'Specify KDC proxy servers for Kerberos clients'. The 'Show Contents' dialog allows input of a value name and the related value." lightbox="media/winauth-azuread/configure-policy-kdc-proxy.png":::
+
+ 1. Define the KDC proxy servers settings using mappings as follows. Substitute your Azure AD tenant ID for the `your_Azure_AD_tenant_id` placeholder. Note the space following `https` and the space prior to the closing `/` in the value mapping.
+
+ |Value name |Value |
+ |||
+    |KERBEROS.MICROSOFTONLINE.COM | `<https login.microsoftonline.com:443:your_Azure_AD_tenant_id/kerberos />` |
+
+ :::image type="content" source="media/winauth-azuread/configure-policy-kdc-proxy-server-settings-detail.png" alt-text="Screenshot of the 'Define KDC proxy server settings' dialog box. A table allows input of multiple rows. Each row consists of a value name and a value.":::
+
+ 1. Select **OK** to close the 'Show Contents' dialog box.
+ 1. Select **Apply** on the 'Specify KDC proxy servers for Kerberos clients' dialog box.
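+
+For scripted or repeated deployments, the mapping value can be assembled from the tenant ID. This sketch only builds the string; the variable names are illustrative, and the literal space after `https` and the space before the closing `/` are required by the value format:
+
+```powershell
+# Hypothetical helper: build the KDC proxy mapping value for the policy setting.
+# The space after "https" and the space before the closing "/" are required.
+$tenantId  = "00000000-0000-0000-0000-000000000000"  # placeholder Azure AD tenant ID
+$valueName = "KERBEROS.MICROSOFTONLINE.COM"
+$value     = "<https login.microsoftonline.com:443:$tenantId/kerberos />"
+Write-Output "$valueName => $value"
+```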
+
+## Rotate the Kerberos Key
+
+You may periodically rotate the Kerberos Key for the created Azure AD Service account and Trusted Domain Object for management purposes.
+
+```powershell
+Set-AzureAdKerberosServer -Domain $domain `
+ -DomainCredential $domainCred `
+ -UserPrincipalName $cloudUserName -SetupCloudTrust `
+ -RotateServerKey
+```
+
+Once the key is rotated, it takes several hours to propagate the changed key between the Kerberos KDC servers. Due to this key distribution timing, you can rotate the key only once within 24 hours. If you need to rotate the key again within 24 hours for any reason, for example, just after creating the Trusted Domain Object, you can add the `-Force` parameter:
+
+```powershell
+Set-AzureAdKerberosServer -Domain $domain `
+ -DomainCredential $domainCred `
+ -UserPrincipalName $cloudUserName -SetupCloudTrust `
+ -RotateServerKey -Force
+```
+
+## Remove the Trusted Domain Object
+
+You can remove the added Trusted Domain Object using the following command:
+
+```powershell
+Remove-AzureADKerberosTrustedDomainObject -Domain $domain `
+ -DomainCredential $domainCred `
+ -UserPrincipalName $cloudUserName
+```
+
+This command will only remove the Trusted Domain Object. If your domain supports FIDO authentication, you can remove the Trusted Domain Object while maintaining the Azure AD Service account required for the FIDO authentication service.
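+
+After removing the Trusted Domain Object, you can re-run the `Get-AzureAdKerberosServer` cmdlet described earlier to confirm the change; the `CloudTrustDisplay` field should return an empty value again:
+
+```powershell
+# Verify removal: CloudTrustDisplay should be empty once the Trusted Domain Object is gone
+Get-AzureAdKerberosServer -Domain $domain `
+    -DomainCredential $domainCred `
+    -UserPrincipalName $cloudUserName
+```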
+
+## Remove all Kerberos Settings
+
+You can remove both the Azure AD Service account and the Trusted Domain Object using the following command:
+
+```powershell
+Remove-AzureAdKerberosServer -Domain $domain `
+ -DomainCredential $domainCred `
+ -UserPrincipalName $cloudUserName
+```
+
+## Next steps
+
+Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed Instance:
+
+- [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)](winauth-azuread-kerberos-managed-instance.md)
+- [What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)](winauth-azuread-overview.md)
+- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
azure-sql Winauth Azuread Setup Modern Interactive Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup-modern-interactive-flow.md
+
+ Title: How to set up Windows authentication for Azure Active Directory with the modern interactive flow (Preview)
+
+description: Learn how to set up Windows Authentication for Azure Active Directory with the modern interactive flow.
+++
+ms.devlang:
++++ Last updated : 03/01/2022++
+# How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)
+
+This article describes how to implement the modern interactive authentication flow to allow enlightened clients running Windows 10 20H1, Windows Server 2022, or a higher version of Windows to authenticate to Azure SQL Managed Instance using Windows Authentication. Clients must be joined to Azure Active Directory (Azure AD) or Hybrid Azure AD.
+
+Enabling the modern interactive authentication flow is one step in [setting up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md). The [incoming trust-based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md) is available for AD joined clients running Windows 10 / Windows Server 2012 and higher.
+
+With this preview, Azure AD is now its own independent Kerberos realm. Windows 10 21H1 clients are already enlightened and will access Azure AD Kerberos to request a Kerberos ticket. The capability for clients to access Azure AD Kerberos is switched off by default and can be enabled by modifying group policy. Group policy can be used to deploy this feature in a staged manner by choosing specific clients you want to pilot on and then expanding it to all the clients across your environment.
+
+## Prerequisites
+
+No AD to Azure AD setup is required to enable software running on Azure AD joined VMs to access Azure SQL Managed Instance using Windows Authentication. The following prerequisites are required to implement the modern interactive authentication flow:
+
+|Prerequisite |Description |
+|||
+|Clients must run Windows 10 20H1, Windows Server 2022, or a higher version of Windows. | |
+|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. |
+|Azure AD tenant. | |
+|Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
+| | |
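+
+You can sketch the Azure AD join check from a command prompt; in the `dsregcmd` output, the device state section indicates join status (the field name below reflects recent Windows builds and is shown for illustration):
+
+```dos
+dsregcmd.exe /status
+
+:: In the "Device State" section of the output, an Azure AD joined client reports:
+::   AzureAdJoined : YES
+```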
++
+## Configure group policy
+
+Enable the following group policy setting `Administrative Templates\System\Kerberos\Allow retrieving the cloud Kerberos ticket during the logon`:
+
+1. Open the group policy editor.
+1. Navigate to `Administrative Templates\System\Kerberos\`.
+1. Select the **Allow retrieving the cloud kerberos ticket during the logon** setting.
+
+    :::image type="content" source="media/winauth-azuread/policy-allow-retrieving-cloud-kerberos-ticket-during-logon.png" alt-text="A list of Kerberos policy settings in the Windows policy editor. The 'Allow retrieving the cloud Kerberos ticket during the logon' policy is highlighted with a red box." lightbox="media/winauth-azuread/policy-allow-retrieving-cloud-kerberos-ticket-during-logon.png":::
+
+1. In the setting dialog, select **Enabled**.
+1. Select **OK**.
+
+ :::image type="content" source="media/winauth-azuread/policy-enable-cloud-kerberos-ticket-during-logon-setting.png" alt-text="Screenshot of the 'Allow retrieving the cloud kerberos ticket during the logon' dialog. Select 'Enabled' and then 'OK' to enable the policy setting." lightbox="media/winauth-azuread/policy-enable-cloud-kerberos-ticket-during-logon-setting.png":::
+
+## Refresh PRT (optional)
+
+Users with existing logon sessions may need to refresh their Azure AD Primary Refresh Token (PRT) if they attempt to use this feature immediately after it has been enabled. It can take up to a few hours for the PRT to refresh on its own.
+
+To refresh PRT manually, run this command from a command prompt:
+
+``` dos
+dsregcmd.exe /RefreshPrt
+```
+
+## Next steps
+
+Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed Instance:
+
+- [What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)](winauth-azuread-overview.md)
+- [How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)](winauth-implementation-aad-kerberos.md)
+- [How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md)
+- [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)](winauth-azuread-kerberos-managed-instance.md)
+- [Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance](winauth-azuread-troubleshoot.md)
azure-sql Winauth Azuread Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-setup.md
+
+ Title: How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)
+
+description: Learn how to set up Windows Authentication access to Azure SQL Managed Instance using Azure Active Directory and Kerberos.
+++
+ms.devlang:
++++ Last updated : 03/01/2022+++
+# How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)
+
+This article gives an overview of how to set up infrastructure and managed instances to implement [Windows Authentication for Azure AD principals on Azure SQL Managed Instance](winauth-azuread-overview.md).
+
+There are two phases to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory (Azure AD) and Kerberos.
+
+- **One-time infrastructure setup.**
+ - Synchronize Active Directory (AD) and Azure AD, if this hasn't already been done.
+  - Enable the modern interactive authentication flow, when available. The modern interactive flow is recommended for Azure AD joined or Hybrid Azure AD joined clients running Windows 10 20H1 / Windows Server 2022 and higher.
+  - Set up the incoming trust-based authentication flow. This is recommended for customers who can't use the modern interactive flow, but who have AD joined clients running Windows 10 / Windows Server 2012 and higher.
+- **Configuration of Azure SQL Managed Instance.**
+ - Create a system assigned service principal for each managed instance.
+
+## One-time infrastructure setup
+
+The first step in infrastructure setup is to synchronize AD with Azure AD, if this hasn't already been completed.
+
+Following this, a system administrator configures authentication flows. Two authentication flows are available to implement Windows Authentication for Azure AD principals on Azure SQL Managed Instance: the incoming trust-based flow supports AD joined clients running Windows Server 2012 or higher, and the modern interactive flow supports Azure AD joined clients running Windows 10 21H1 or higher.
+
+### Synchronize AD with Azure AD
+
+Customers should first implement [Azure AD Connect](/azure/active-directory/hybrid/whatis-azure-ad-connect.md) to integrate on-premises directories with Azure AD.
+
+### Select which authentication flow(s) you will implement
+
+The following diagram shows eligibility and the core functionality of the modern interactive flow and the incoming trust-based flow:
+
+*Diagram: a decision tree showing that the modern interactive flow is suitable for clients running Windows 10 20H1, Windows Server 2022, or higher that are Azure AD joined or Hybrid AD joined. The incoming trust-based flow is suitable for AD joined clients running Windows 10, Windows Server 2012, or higher.*
+
+The modern interactive flow works with enlightened clients running Windows 10 21H1 and higher that are Azure AD or Hybrid Azure AD joined. In the modern interactive flow, users can access Azure SQL Managed Instance without requiring a line of sight to Domain Controllers (DCs). There is no need for a trust object to be created in the customer's AD. To enable the modern interactive flow, an administrator will set a group policy that allows a Kerberos ticket-granting ticket (TGT) to be used during login.
+
+The incoming trust-based flow works for clients running Windows 10 or Windows Server 2012 and higher. This flow requires that clients be joined to AD and have a line of sight to AD from on-premises. In the incoming trust-based flow, a trust object is created in the customer's AD and is registered in Azure AD. To enable the incoming trust-based flow, an administrator will set up an incoming trust with Azure AD and set up Kerberos Proxy via group policy.
+
+### Modern interactive authentication flow
+
+The following prerequisites are required to implement the modern interactive authentication flow:
+
+|Prerequisite |Description |
+|||
+|Clients must run Windows 10 20H1, Windows Server 2022, or a higher version of Windows. | |
+|Clients must be joined to Azure AD or Hybrid Azure AD. | You can determine if this prerequisite is met by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Application must connect to the managed instance via an interactive session. | This supports applications such as SQL Server Management Studio (SSMS) and web applications, but won't work for applications that run as a service. |
+|Azure AD tenant. | |
+|Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
+| | |
+
+See [How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)](winauth-azuread-setup-modern-interactive-flow.md) for steps to enable this authentication flow.
+
+### Incoming trust-based authentication flow
+
+The following prerequisites are required to implement the incoming trust-based authentication flow:
+
+|Prerequisite |Description |
+|||
+|Client must run Windows 10, Windows Server 2012, or a higher version of Windows. | |
+|Clients must be joined to AD. The domain must have a functional level of Windows Server 2012 or higher. | You can determine if the client is joined to AD by running the [dsregcmd command](/azure/active-directory/devices/troubleshoot-device-dsregcmd.md): `dsregcmd.exe /status` |
+|Azure AD Hybrid Authentication Management Module. | This PowerShell module provides management features for on-premises setup. |
+|Azure tenant. | |
+|Azure subscription under the same Azure AD tenant you plan to use for authentication.| |
+|Azure AD Connect installed. | Hybrid environments where identities exist both in Azure AD and AD. |
+| | |
+
+See [How to set up Windows Authentication for Azure Active Directory with the incoming trust based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md) for instructions on enabling this authentication flow.
++
+## Configure Azure SQL Managed Instance
+
+The steps to set up Azure SQL Managed Instance are the same for both the incoming trust-based authentication flow and the modern interactive authentication flow.
+
+#### Prerequisites to configure a managed instance
+
+The following prerequisites are required to configure a managed instance for Windows Authentication for Azure AD principals:
+
+|Prerequisite | Description |
+|||
+|Az.Sql PowerShell module | This PowerShell module provides management cmdlets for Azure SQL resources. Install this module by running the following PowerShell command: `Install-Module -Name Az.Sql` |
+|Azure Active Directory PowerShell Module | This module provides management cmdlets for Azure AD administrative tasks such as user and service principal management. Install this module by running the following PowerShell command: `Install-Module -Name AzureAD` |
+| A managed instance | You may [create a new managed instance](../../azure-arc/dat) or use an existing managed instance. |
+
+#### Configure each managed instance
+
+See [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory](winauth-azuread-kerberos-managed-instance.md) for steps to configure each managed instance.
+
+## Limitations
+
+The following limitations apply to Windows Authentication for Azure AD principals on Azure SQL Managed Instance:
+
+### Not available for Linux clients
+
+Windows Authentication for Azure AD principals is currently supported only for client machines running Windows.
+
+### Azure AD cached logon
+
+Windows limits how often it connects to Azure AD, so there is a potential for user accounts to not have a refreshed Kerberos Ticket Granting Ticket (TGT) within 4 hours of an upgrade or fresh deployment of a client machine. User accounts that do not have a refreshed TGT will experience failed ticket requests from Azure AD.
+
+As an administrator, you can trigger an online logon immediately to handle upgrade scenarios by running the following command on the client machine, then locking and unlocking the user session to get a refreshed TGT:
+
+```dos
+dsregcmd.exe /RefreshPrt
+```
+
+## Next steps
+
+Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed Instance:
+
+- [What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)](winauth-azuread-overview.md)
+- [How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)](winauth-implementation-aad-kerberos.md)
+- [How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)](winauth-azuread-setup-modern-interactive-flow.md)
+- [How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md)
+- [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)](winauth-azuread-kerberos-managed-instance.md)
azure-sql Winauth Azuread Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-troubleshoot.md
+
+ Title: Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance
+
+description: Learn to troubleshoot Azure Active Directory Kerberos authentication for Azure SQL Managed Instance.
+++
+ms.devlang:
++++ Last updated : 03/01/2022+++
+# Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance
+
+This article contains troubleshooting steps for use when implementing [Windows Authentication for Azure AD principals](winauth-azuread-overview.md).
+
+## Verify tickets are getting cached
+
+Use the [klist](/windows-server/administration/windows-commands/klist) command to display a list of currently cached Kerberos tickets.
+
+The `klist get krbtgt` command should return a ticket from the on-premises Active Directory realm.
+
+```dos
+klist get krbtgt/kerberos.microsoftonline.com
+```
+
+The `klist get MSSQLSvc` command should return a ticket from the `kerberos.microsoftonline.com` realm with a Service Principal Name (SPN) of `MSSQLSvc/<miname>.<dnszone>.database.windows.net:1433`.
+
+```dos
+klist get MSSQLSvc/<miname>.<dnszone>.database.windows.net:1433
+```
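+
+If stale tickets from earlier attempts are cached, clearing the ticket cache before retrying can help isolate the failure (a general Kerberos troubleshooting step, not specific to this article):
+
+```dos
+klist purge
+```
+
+After purging, run the `klist get` commands again to request fresh tickets.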
++
+The following are some well-known error codes:
+
+- **0x6fb: SQL SPN not found** - Check that you've entered a valid SPN. If you've implemented the incoming trust-based authentication flow, revisit steps to [create and configure the Azure AD Kerberos Trusted Domain Object](winauth-azuread-setup-incoming-trust-based-flow.md#create-and-configure-the-azure-ad-kerberos-trusted-domain-object) to validate that you've performed all the configuration steps.
+- **0x51f** - This error is likely related to a conflict with the Fiddler tool. Turn off Fiddler to mitigate the issue.
+
+## Investigate message flow failures
+
+Use Wireshark, or the network traffic analyzer of your choice, to monitor traffic between the client and on-prem Kerberos Key Distribution Center (KDC).
+
+When using Wireshark the following is expected:
+
+- AS-REQ: Client => on-prem KDC => returns on-prem TGT.
+- TGS-REQ: Client => on-prem KDC => returns referral to `kerberos.microsoftonline.com`.
+
+## Next steps
+
+Learn more about implementing Windows Authentication for Azure AD principals on Azure SQL Managed Instance:
+
+- [What is Windows Authentication for Azure Active Directory principals on Azure SQL Managed Instance? (Preview)](winauth-azuread-overview.md)
+- [How to set up Windows Authentication for Azure SQL Managed Instance using Azure Active Directory and Kerberos (Preview)](winauth-azuread-setup.md)
+- [How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)](winauth-implementation-aad-kerberos.md)
+- [How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)](winauth-azuread-setup-modern-interactive-flow.md)
+- [How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md)
azure-sql Winauth Implementation Aad Kerberos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-implementation-aad-kerberos.md
+
+ Title: How Windows Authentication for Azure SQL Managed Instance is implemented with Azure AD and Kerberos (Preview)
+
+description: Learn how Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory (Azure AD) and Kerberos.
+++
+ms.devlang:
++++ Last updated : 03/01/2022++
+# How Windows Authentication for Azure SQL Managed Instance is implemented with Azure Active Directory and Kerberos (Preview)
+
+[Windows Authentication for Azure AD principals on Azure SQL Managed Instance](winauth-azuread-overview.md) enables customers to move existing services to the cloud while maintaining a seamless user experience and provides the basis for security infrastructure modernization. To enable Windows Authentication for Azure Active Directory (Azure AD) principals, you will turn your Azure AD tenant into an independent Kerberos realm and create an incoming trust in the customer domain.
+
+This configuration allows users in the customer domain to access resources in your Azure AD tenant. It will not allow users in the Azure AD tenant to access resources in the customer domain.
+
+The following diagram gives an overview of how Windows Authentication is implemented for a managed instance using Azure AD and Kerberos:
+++
+## How Azure AD provides Kerberos authentication
+
+To create an independent Kerberos realm for an Azure AD tenant, customers install the Azure AD Hybrid Authentication Management PowerShell module on any Windows server and run a cmdlet to create an Azure AD Kerberos object in their cloud and Active Directory. Trust created in this way enables existing Windows clients to access Azure AD with Kerberos.
+
+Windows 10 21H1 clients and above have been enlightened for interactive mode and do not need configuration for interactive login flows to work. Clients running previous versions of Windows can be configured to use Kerberos Key Distribution Center (KDC) proxy servers for Kerberos authentication.
+
+Kerberos authentication in Azure AD enables:
+
+- Traditional on-premises applications to move to the cloud without changing their fundamental authentication scheme.
+
+- Applications running on enlightened clients to authenticate using Azure AD directly.
+
+## How Azure SQL Managed Instance works with Azure AD and Kerberos
+
+Customers use the Azure portal to enable a system assigned service principal on each managed instance. The service principal allows managed instance users to authenticate using the Kerberos protocol.
+
+## Next steps
+
+Learn more about enabling Windows Authentication for Azure AD principals on Azure SQL Managed Instance:
+
+- [How to set up Windows Authentication for Azure Active Directory with the modern interactive flow (Preview)](winauth-azuread-setup-modern-interactive-flow.md)
+- [How to set up Windows Authentication for Azure AD with the incoming trust-based flow (Preview)](winauth-azuread-setup-incoming-trust-based-flow.md)
+- [Configure Azure SQL Managed Instance for Windows Authentication for Azure Active Directory (Preview)](winauth-azuread-kerberos-managed-instance.md)
+- [Troubleshoot Windows Authentication for Azure AD principals on Azure SQL Managed Instance](winauth-azuread-troubleshoot.md)
azure-sql Doc Changes Updates Release Notes Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/doc-changes-updates-release-notes-whats-new.md
vm-windows-sql-server Previously updated : 01/08/2022 Last updated : 03/02/2022

# Documentation changes for SQL Server on Azure Virtual Machines
When you deploy an Azure virtual machine (VM) with SQL Server installed on it, either manually, or through a built-in image, you can leverage Azure features to improve your experience. This article summarizes the documentation changes associated with new features and improvements in the recent releases of [SQL Server on Azure Virtual Machines (VMs)](https://azure.microsoft.com/services/virtual-machines/sql-server/). To learn more about SQL Server on Azure VMs, see the [overview](sql-server-on-azure-vm-iaas-what-is-overview.md).
+## March 2022
+
+| Changes | Details |
+| | |
+| **Security best practices** | The [SQL Server VM security best practices](security-considerations-best-practices.md) have been rewritten and refreshed! |
+| &nbsp; | &nbsp; |
+
## January 2022

| Changes | Details |
azure-sql Manage Sql Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/manage-sql-vm-portal.md
Use the **Defender for SQL** page of the SQL virtual machine's resource to view
![Configure SQL Server Defender for Cloud settings in the Azure portal using the SQL virtual machines resource](./media/manage-sql-vm-portal/sql-vm-security-center.png)
-## SQL Assessment (Preview)
+## SQL best practices assessment
-Use the **SQL Assessment** page of the SQL virtual machines resource to assess the health of your SQL Server VM. Once the feature is enabled, your SQL Server instances and databases are scanned and recommendations are surfaced to improve performance (indexes, statistics, trace flags, and so on) and identify missing best practices configurations. SQL Assessment is currently in preview.
+Use the **SQL best practices assessment** page of the SQL virtual machines resource to assess the health of your SQL Server VM. Once the feature is enabled, your SQL Server instances and databases are scanned and recommendations are surfaced to improve performance (indexes, statistics, trace flags, and so on) and identify missing best practices configurations.
-
-To learn more, see [SQL Assessment for SQL Server on Azure VMs](sql-assessment-for-sql-vm.md).
+To learn more, see [SQL best practices assessment for SQL Server on Azure VMs](sql-assessment-for-sql-vm.md).
## Next steps
azure-sql Performance Guidelines Best Practices Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-checklist.md
For your SQL Server availability group or failover cluster instance, consider th
To learn more, see the comprehensive [HADR best practices](hadr-cluster-best-practices.md).
+## Security
+
+The checklist in this section covers the [security best practices](security-considerations-best-practices.md) for SQL Server on Azure VMs.
+
+SQL Server features and capabilities provide a method of security at the data level, and are how you achieve [defense-in-depth](https://azure.microsoft.com/resources/videos/defense-in-depth-security-in-azure/) at the infrastructure level for cloud-based and hybrid solutions. In addition, Azure security measures make it possible to encrypt your sensitive data, protect virtual machines from viruses and malware, secure network traffic, identify and detect threats, meet compliance requirements, and provide a single method for administration and reporting for any security need in the hybrid cloud.
+
+- Use [Azure Security Center](../../../defender-for-cloud/defender-for-cloud-introduction.md) to evaluate and take action to improve the security posture of your data environment. Capabilities such as [Azure Advanced Threat Protection (ATP)](../../database/threat-detection-overview.md) can be leveraged across your hybrid workloads to improve security evaluation and give the ability to react to risks. Registering your SQL Server VM with the [SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md) surfaces Azure Security Center assessments within the [SQL virtual machine resource](manage-sql-vm-portal.md) of the Azure portal.
+- Leverage [Microsoft Defender for SQL](../../../defender-for-cloud/defender-for-sql-introduction.md) to discover and mitigate potential database vulnerabilities, as well as detect anomalous activities that could indicate a threat to your SQL Server instance and database layer.
+- [Vulnerability Assessment](../../database/sql-vulnerability-assessment.md) is a part of [Microsoft Defender for SQL](../../../defender-for-cloud/defender-for-sql-introduction.md) that can discover and help remediate potential risks to your SQL Server environment. It provides visibility into your security state, and includes actionable steps to resolve security issues.
+- [Azure Advisor](../../../advisor/advisor-security-recommendations.md) analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost effectiveness, performance, high availability, and security of your Azure resources. Leverage Azure Advisor at the virtual machine, resource group, or subscription level to help identify and apply best practices to optimize your Azure deployments.
+- Use [Azure Disk Encryption](../../../virtual-machines/windows/disk-encryption-windows.md) when your compliance and security needs require you to encrypt the data end-to-end using your encryption keys, including encryption of the ephemeral (locally attached temporary) disk.
+- [Managed Disks are encrypted](../../../virtual-machines/disk-encryption.md) at rest by default using Azure Storage Service Encryption, where the encryption keys are Microsoft-managed keys stored in Azure.
+- For a comparison of the managed disk encryption options review the [managed disk encryption comparison chart](../../../virtual-machines/disk-encryption-overview.md#comparison)
+- Close management ports on your virtual machines - open remote management ports expose your VM to a high level of risk from internet-based attacks that attempt to brute force credentials to gain admin access to the machine.
+- Turn on [Just-in-time (JIT) access](../../../defender-for-cloud/just-in-time-access-usage.md) for Azure virtual machines.
+- Use [Azure Bastion](../../../bastion/bastion-overview.md) over Remote Desktop Protocol (RDP).
+- Lock down ports and only allow the necessary application traffic using [Azure Firewall](../../../firewall/features.md), a managed Firewall as a Service (FaaS) that grants or denies server access based on the originating IP address.
+- Use [Network Security Groups (NSGs)](../../../virtual-network/network-security-groups-overview.md) to filter network traffic to, and from, Azure resources on Azure Virtual Networks.
+- Leverage [Application Security Groups](../../../virtual-network/application-security-groups.md) to group together servers with similar port filtering requirements and similar functions, such as web servers and database servers.
+- For web and application servers, leverage [Azure Distributed Denial of Service (DDoS) protection](../../../ddos-protection/ddos-protection-overview.md). DDoS attacks are designed to overwhelm and exhaust network resources, making apps slow or unresponsive, and commonly target user interfaces. Azure DDoS protection sanitizes unwanted network traffic before it impacts service availability.
+- Leverage VM extensions to help address anti-malware, desired state, threat detection, prevention, and remediation to address threats at the operating system, machine, and network levels:
+ - [Guest Configuration extension](../../../virtual-machines/extensions/guest-configuration.md) performs audit and configuration operations inside virtual machines.
+ - [Network Watcher Agent virtual machine extension for Windows and Linux](../../../virtual-machines/extensions/network-watcher-windows.md) provides network performance monitoring, diagnostics, and analytics for Azure networks.
+ - [Microsoft Antimalware Extension for Windows](../../../virtual-machines/extensions/iaas-antimalware-windows.md) to help identify and remove viruses, spyware, and other malicious software, with configurable alerts.
+ - [Evaluate 3rd party extensions](../../../virtual-machines/extensions/overview.md) such as [Symantec Endpoint Protection for Windows VM](../../../virtual-machines/extensions/symantec)
+- Leverage [Azure Policy](../../../governance/policy/overview.md) to create business rules that can be applied to your environment. Azure Policies evaluate Azure resources by comparing the properties of those resources against rules defined in JSON format.
+- Azure Blueprints enables cloud architects and central information technology groups to define a repeatable set of Azure resources that implements and adheres to an organization's standards, patterns, and requirements. Azure Blueprints are [different than Azure Policies](../../../governance/blueprints/overview.md#how-its-different-from-azure-policy).
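+
+The Azure Policy item above describes evaluating resources by comparing their properties against rules defined in JSON. As a rough sketch of that comparison model in Python (the rule shape, field name, and `evaluate` helper are simplified illustrations, not the real Azure Policy grammar or SDK):
+
```python
import json

# A simplified policy rule, loosely modeled on Azure Policy's
# "if/then" JSON structure (field names are illustrative only).
POLICY_JSON = """
{
  "if": {"field": "managementPortsOpen", "equals": true},
  "then": {"effect": "deny"}
}
"""

def evaluate(policy: dict, resource: dict) -> str:
    """Return the policy effect for a resource, or 'allow' when the
    condition does not match (simplified single-condition semantics)."""
    cond = policy["if"]
    if resource.get(cond["field"]) == cond["equals"]:
        return policy["then"]["effect"]
    return "allow"

policy = json.loads(POLICY_JSON)
print(evaluate(policy, {"managementPortsOpen": True}))   # deny
print(evaluate(policy, {"managementPortsOpen": False}))  # allow
```
+
+Real policy definitions support richer conditions (`allOf`, `anyOf`, aliases) and effects such as `audit` and `deployIfNotExists`; the point here is only the "compare properties against a JSON rule" model.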
+
## Next steps
-To learn more, see the other articles in this series:
+To learn more, see the other articles in this best practices series:
- [VM size](performance-guidelines-best-practices-vm-size.md)
- [Storage](performance-guidelines-best-practices-storage.md)
To learn more, see the other articles in this series:
- [HADR settings](hadr-cluster-best-practices.md)
- [Collect baseline](performance-guidelines-best-practices-collect-baseline.md)
-For security best practices, see [Security considerations for SQL Server on Azure Virtual Machines](security-considerations-best-practices.md).
- Consider enabling [SQL Assessment for SQL Server on Azure VMs](sql-assessment-for-sql-vm.md).

Review other SQL Server Virtual Machine articles at [SQL Server on Azure Virtual Machines Overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.yml).
azure-sql Performance Guidelines Best Practices Collect Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-collect-baseline.md
The following PerfMon counters can help validate the compute health of a SQL Ser
## Next steps
-To learn more, see the other articles in this series:
+To learn more, see the other articles in this best practices series:
+
- [Quick checklist](performance-guidelines-best-practices-checklist.md)
- [VM size](performance-guidelines-best-practices-vm-size.md)
- [Storage](performance-guidelines-best-practices-storage.md)
azure-sql Performance Guidelines Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-storage.md
There are specific Azure Monitor metrics that are invaluable for discovering cap
## Next steps
-To learn more about performance best practices, see the other articles in this series:
+To learn more, see the other articles in this best practices series:
+
- [Quick checklist](performance-guidelines-best-practices-checklist.md)
- [VM size](performance-guidelines-best-practices-vm-size.md)
- [Security](security-considerations-best-practices.md)
azure-sql Performance Guidelines Best Practices Vm Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/performance-guidelines-best-practices-vm-size.md
For example, the [M64-32ms](../../../virtual-machines/constrained-vcpu.md) requi
## Next steps
-To learn more, see the other articles in this series:
+To learn more, see the other articles in this best practices series:
+
- [Quick checklist](performance-guidelines-best-practices-checklist.md)
- [Storage](performance-guidelines-best-practices-storage.md)
- [Security](security-considerations-best-practices.md)
azure-sql Security Considerations Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/security-considerations-best-practices.md
Title: Security considerations | Microsoft Docs
+ Title: "Security: Best practices"
description: This topic provides general guidance for securing SQL Server running in an Azure virtual machine. documentationcenter: na
vm-windows-sql-server Previously updated : 05/30/2021 Last updated : 03/02/2022
This topic includes overall security guidelines that help establish secure acces
Azure complies with several industry regulations and standards that can enable you to build a compliant solution with SQL Server running in a virtual machine. For information about regulatory compliance with Azure, see [Azure Trust Center](https://azure.microsoft.com/support/trust-center/).
-In addition to the practices described in this topic, we recommend that you review and implement the security best practices from both traditional on-premises security practices, as well as virtual machine security best practices.
+First review the security best practices for [SQL Server](/sql/relational-databases/security/sql-server-security-best-practices) and [Azure VMs](../../../virtual-machines/security-recommendations.md) and then review this article for the best practices that apply to SQL Server on Azure VMs specifically.
+
+To learn more about SQL Server VM best practices, see the other articles in this series: [Checklist](performance-guidelines-best-practices-checklist.md), [VM size](performance-guidelines-best-practices-vm-size.md), [HADR configuration](hadr-cluster-best-practices.md), and [Collect baseline](performance-guidelines-best-practices-collect-baseline.md).
+
+## Checklist
+
+Review the checklist in this section for a brief overview of the security best practices that the rest of the article covers in greater detail.
+
+SQL Server features and capabilities provide a method of security at the data level, and are how you achieve [defense-in-depth](https://azure.microsoft.com/resources/videos/defense-in-depth-security-in-azure/) at the infrastructure level for cloud-based and hybrid solutions. In addition, Azure security measures make it possible to encrypt your sensitive data, protect virtual machines from viruses and malware, secure network traffic, identify and detect threats, meet compliance requirements, and provide a single method for administration and reporting for any security need in the hybrid cloud.
+
+- Use [Azure Security Center](../../../defender-for-cloud/defender-for-cloud-introduction.md) to evaluate and take action to improve the security posture of your data environment. Capabilities such as [Azure Advanced Threat Protection (ATP)](../../database/threat-detection-overview.md) can be leveraged across your hybrid workloads to improve security evaluation and give the ability to react to risks. Registering your SQL Server VM with the [SQL IaaS Agent extension](sql-agent-extension-manually-register-single-vm.md) surfaces Azure Security Center assessments within the [SQL virtual machine resource](manage-sql-vm-portal.md) of the Azure portal.
+- Leverage [Microsoft Defender for SQL](../../../defender-for-cloud/defender-for-sql-introduction.md) to discover and mitigate potential database vulnerabilities, as well as detect anomalous activities that could indicate a threat to your SQL Server instance and database layer.
+- [Vulnerability Assessment](../../database/sql-vulnerability-assessment.md) is a part of [Microsoft Defender for SQL](../../../defender-for-cloud/defender-for-sql-introduction.md) that can discover and help remediate potential risks to your SQL Server environment. It provides visibility into your security state, and includes actionable steps to resolve security issues.
+- [Azure Advisor](../../../advisor/advisor-security-recommendations.md) analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost effectiveness, performance, high availability, and security of your Azure resources. Leverage Azure Advisor at the virtual machine, resource group, or subscription level to help identify and apply best practices to optimize your Azure deployments.
+- Use [Azure Disk Encryption](../../../virtual-machines/windows/disk-encryption-windows.md) when your compliance and security needs require you to encrypt the data end-to-end using your encryption keys, including encryption of the ephemeral (locally attached temporary) disk.
+- [Managed Disks are encrypted](../../../virtual-machines/disk-encryption.md) at rest by default using Azure Storage Service Encryption, where the encryption keys are Microsoft-managed keys stored in Azure.
+- For a comparison of the managed disk encryption options review the [managed disk encryption comparison chart](../../../virtual-machines/disk-encryption-overview.md#comparison)
+- Close management ports on your virtual machines - open remote management ports expose your VM to a high level of risk from internet-based attacks that attempt to brute force credentials to gain admin access to the machine.
+- Turn on [Just-in-time (JIT) access](../../../defender-for-cloud/just-in-time-access-usage.md) for Azure virtual machines.
+- Use [Azure Bastion](../../../bastion/bastion-overview.md) over Remote Desktop Protocol (RDP).
+- Lock down ports and only allow the necessary application traffic using [Azure Firewall](../../../firewall/features.md), a managed Firewall as a Service (FaaS) that grants or denies server access based on the originating IP address.
+- Use [Network Security Groups (NSGs)](../../../virtual-network/network-security-groups-overview.md) to filter network traffic to, and from, Azure resources on Azure Virtual Networks.
+- Leverage [Application Security Groups](../../../virtual-network/application-security-groups.md) to group together servers with similar port filtering requirements and similar functions, such as web servers and database servers.
+- For web and application servers, leverage [Azure Distributed Denial of Service (DDoS) protection](../../../ddos-protection/ddos-protection-overview.md). DDoS attacks are designed to overwhelm and exhaust network resources, making apps slow or unresponsive, and commonly target user interfaces. Azure DDoS protection sanitizes unwanted network traffic before it impacts service availability.
+- Leverage VM extensions to help address anti-malware, desired state, threat detection, prevention, and remediation to address threats at the operating system, machine, and network levels:
+ - [Guest Configuration extension](../../../virtual-machines/extensions/guest-configuration.md) performs audit and configuration operations inside virtual machines.
+ - [Network Watcher Agent virtual machine extension for Windows and Linux](../../../virtual-machines/extensions/network-watcher-windows.md) provides network performance monitoring, diagnostics, and analytics for Azure networks.
+ - [Microsoft Antimalware Extension for Windows](../../../virtual-machines/extensions/iaas-antimalware-windows.md) to help identify and remove viruses, spyware, and other malicious software, with configurable alerts.
+ - [Evaluate 3rd party extensions](../../../virtual-machines/extensions/overview.md) such as [Symantec Endpoint Protection for Windows VM](../../../virtual-machines/extensions/symantec)
+- Leverage [Azure Policy](../../../governance/policy/overview.md) to create business rules that can be applied to your environment. Azure Policies evaluate Azure resources by comparing the properties of those resources against rules defined in JSON format.
+- Azure Blueprints enables cloud architects and central information technology groups to define a repeatable set of Azure resources that implements and adheres to an organization's standards, patterns, and requirements. Azure Blueprints are [different than Azure Policies](../../../governance/blueprints/overview.md#how-its-different-from-azure-policy).
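+
+As a mental model for the NSG items in the checklist above: NSG rules are evaluated in priority order (lower number first), and the first matching rule decides whether traffic is allowed or denied. A hedged Python sketch under simplified assumptions (real NSG rules also match source/destination address prefixes, protocol, and direction; these rule dictionaries are illustrative only):
+
```python
# Simplified NSG-style evaluation: rules are checked in ascending
# priority order and the first match wins. This sketch keys on the
# destination port only; real NSG rules match far more fields.
RULES = [
    {"priority": 100, "dest_port": 1433, "access": "Allow"},  # app traffic to SQL
    {"priority": 200, "dest_port": 3389, "access": "Deny"},   # block RDP
    {"priority": 4096, "dest_port": "*", "access": "Deny"},   # catch-all deny
]

def access_for(port: int) -> str:
    """Return the access decision for traffic to the given port."""
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        if rule["dest_port"] == "*" or rule["dest_port"] == port:
            return rule["access"]
    return "Deny"

print(access_for(1433))  # Allow
print(access_for(3389))  # Deny
print(access_for(80))    # Deny (catch-all rule)
```
+
+Pairing a narrow `Allow` rule for application traffic with a low-priority catch-all `Deny` mirrors the "lock down ports and only allow necessary application traffic" guidance above.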
## Microsoft Defender for SQL
-[Microsoft Defender for SQL](../../../security-center/defender-for-sql-introduction.md) enables Microsoft Defender for Cloud security features such as vulnerability assessments and security alerts. See [enable Microsoft Defender for SQL](../../../security-center/defender-for-sql-usage.md) to learn more.
+[Microsoft Defender for SQL](../../../defender-for-cloud/defender-for-sql-introduction.md) enables Azure Security Center security features such as [vulnerability assessments](../../database/sql-vulnerability-assessment.md) and security alerts. See [enable Microsoft Defender for SQL](../../../defender-for-cloud/defender-for-sql-usage.md) to learn more.
+
+Use Microsoft Defender for SQL to discover and mitigate potential database vulnerabilities, and detect anomalous activities that may indicate a threat to your SQL Server instance and database layer. [Vulnerability Assessments](../../database/sql-vulnerability-assessment.md) are a feature of Microsoft Defender for SQL that can discover and help remediate potential risks to your SQL Server environment. It provides visibility into your security state, and it includes actionable steps to resolve security issues. Registering your SQL Server VM with the [SQL Server IaaS Agent Extension](sql-agent-extension-manually-register-single-vm.md) surfaces Microsoft Defender for SQL recommendations to the [SQL virtual machines resource](manage-sql-vm-portal.md) in the Azure portal.
+
## Portal management

After you've [registered your SQL Server VM with the SQL IaaS extension](sql-agent-extension-manually-register-single-vm.md), you can configure a number of security settings using the [SQL virtual machines resource](manage-sql-vm-portal.md) in the Azure portal, such as enabling Azure Key Vault integration, or SQL authentication.
-Additionally, after you've enabled [Microsoft Defender for SQL](../../../security-center/defender-for-sql-usage.md) you can view Defender for Cloud features directly within the [SQL virtual machines resource](manage-sql-vm-portal.md) in the Azure portal, such as vulnerability assessments and security alerts.
+Additionally, after you've enabled [Microsoft Defender for SQL](../../../defender-for-cloud/defender-for-sql-usage.md) you can view Defender for Cloud features directly within the [SQL virtual machines resource](manage-sql-vm-portal.md) in the Azure portal, such as vulnerability assessments and security alerts.
See [manage SQL Server VM in the portal](manage-sql-vm-portal.md) to learn more.
-## Azure Key Vault integration
+## Azure Security Center
-There are multiple SQL Server encryption features, such as transparent data encryption (TDE), column level encryption (CLE), and backup encryption. These forms of encryption require you to manage and store the cryptographic keys you use for encryption. The Azure Key Vault service is designed to improve the security and management of these keys in a secure and highly available location. The SQL Server Connector enables SQL Server to use these keys from Azure Key Vault.
-For comprehensive details, see the other articles in this series: [Checklist](performance-guidelines-best-practices-checklist.md), [VM size](performance-guidelines-best-practices-vm-size.md), [Storage](performance-guidelines-best-practices-storage.md), [HADR configuration](hadr-cluster-best-practices.md), [Collect baseline](performance-guidelines-best-practices-collect-baseline.md).
+[Azure Security Center](../../../defender-for-cloud/defender-for-cloud-introduction.md) is a unified security management system designed to evaluate and provide opportunities to improve the security posture of your data environment. Azure Security Center provides a consolidated view of the security health of all assets in the hybrid cloud.
-See [Azure Key Vault integration](azure-key-vault-integration-configure.md) to learn more.
+- Use [security score](../../../defender-for-cloud/secure-score-security-controls.md) in Azure Security Center.
+- For further details, review the lists of currently available [compute](../../../defender-for-cloud/recommendations-reference.md#compute-recommendations) and [data recommendations](../../../security-center/recommendations-reference.md#data-recommendations).
+- Registering your SQL Server VM with the [SQL Server IaaS Agent Extension](sql-agent-extension-manually-register-single-vm.md) surfaces Azure Security Center recommendations to the [SQL virtual machines resource](manage-sql-vm-portal.md) in the Azure portal.
+## Azure Advisor
+
+[Azure Advisor](../../../advisor/advisor-security-recommendations.md) is a personalized cloud consultant that helps you follow best practices to optimize your Azure deployments. Azure Advisor analyzes your resource configuration and usage telemetry and then recommends solutions that can help you improve the cost effectiveness, performance, high availability, and security of your Azure resources. Azure Advisor can evaluate at the virtual machine, resource group, or subscription level.
-## Access control
+## Azure Key Vault integration
+
+There are multiple SQL Server encryption features, such as transparent data encryption (TDE), column level encryption (CLE), and backup encryption. These forms of encryption require you to manage and store the cryptographic keys you use for encryption. The [Azure Key Vault](azure-key-vault-integration-configure.md) service is designed to improve the security and management of these keys in a secure and highly available location. The SQL Server Connector allows SQL Server to use these keys from Azure Key Vault.
-When you create your SQL Server virtual machine, consider how to carefully control who has access to the machine and to SQL Server. In general, you should do the following:
+Consider the following:
-- Restrict access to SQL Server to only the applications and clients that need it.
-- Follow best practices for managing user accounts and passwords.
+ - Azure Key Vault stores application secrets in a centralized cloud location to securely control access permissions, and separate access logging.
+ - When bringing your own keys to Azure, it is recommended to store secrets and certificates in the [Azure Key Vault](/sql/relational-databases/security/encryption/extensible-key-management-using-azure-key-vault-sql-server).
+ - Azure Disk Encryption uses [Azure Key Vault](../../../virtual-machines/windows/disk-encryption-key-vault.md) to control and manage disk encryption keys and secrets.
-The following sections provide suggestions on thinking through these points.
-## Secure connections
+## Access control
-When you create a SQL Server virtual machine with a gallery image, the **SQL Server Connectivity** option gives you the choice of **Local (inside VM)**, **Private (within Virtual Network)**, or **Public (Internet)**.
+When you create a SQL Server virtual machine with an Azure gallery image, the **SQL Server Connectivity** option gives you the choice of **Local (inside VM)**, **Private (within Virtual Network)**, or **Public (Internet)**.
![SQL Server connectivity](./media/security-considerations-best-practices/sql-vm-connectivity-option.png)
In addition to NSG rules to restrict network traffic, you can also use the Windo
If you are using endpoints with the classic deployment model, remove any endpoints on the virtual machine if you do not use them. For instructions on using ACLs with endpoints, see [Manage the ACL on an endpoint](/previous-versions/azure/virtual-machines/windows/classic/setup-endpoints#manage-the-acl-on-an-endpoint). This is not necessary for VMs that use the Azure Resource Manager.
-Finally, consider enabling encrypted connections for the instance of the SQL Server Database Engine in your Azure virtual machine. Configure SQL server instance with a signed certificate. For more information, see [Enable Encrypted Connections to the Database Engine](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) and [Connection String Syntax](/dotnet/framework/data/adonet/connection-string-syntax).
+Consider enabling [encrypted connections](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) for the instance of the SQL Server Database Engine in your Azure virtual machine, and configure the SQL Server instance with a signed certificate. For more information, see [Enable Encrypted Connections to the Database Engine](/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine) and [Connection String Syntax](/dotnet/framework/data/adonet/connection-string-syntax).
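+
+As an illustration of requesting an encrypted connection from a client, the following Python sketch assembles an ODBC-style connection string. The server name, database, and helper function are placeholders; `Encrypt` and `TrustServerCertificate` are standard SQL Server connection string keywords, but verify the exact keywords against your driver's documentation:
+
```python
def build_connection_string(server: str, database: str,
                            encrypt: bool = True,
                            trust_server_cert: bool = False) -> str:
    """Assemble an ODBC-style SQL Server connection string that requests
    an encrypted channel and validates the server's certificate."""
    parts = {
        "Driver": "{ODBC Driver 18 for SQL Server}",
        "Server": server,
        "Database": database,
        # Request TLS; keep certificate validation on so the signed
        # certificate configured on the instance is actually checked.
        "Encrypt": "yes" if encrypt else "no",
        "TrustServerCertificate": "yes" if trust_server_cert else "no",
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = build_connection_string("myserver.example.com", "AdventureWorks")
print(conn_str)
```
+
+Leaving `TrustServerCertificate` off (the default here) is what makes the signed certificate meaningful: the client rejects servers whose certificates fail validation.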
-## Encryption
-
-Managed disks offer Server-Side Encryption, and Azure Disk Encryption. [Server-Side Encryption](../../../virtual-machines/disk-encryption.md) provides encryption-at-rest and safeguards your data to meet your organizational security and compliance commitments. [Azure Disk Encryption](../../../security/fundamentals/azure-disk-encryption-vms-vmss.md) uses either BitLocker or DM-Crypt technology, and integrates with Azure Key Vault to encrypt both the OS and data disks.
-
-## Non-default port
+Consider the following when **securing the network connectivity or perimeter**:
-By default, SQL Server listens on a well-known port, 1433. For increased security, configure SQL Server to listen on a non-default port, such as 1401. If you provision a SQL Server gallery image in the Azure portal, you can specify this port in the **SQL Server settings** blade.
+- [Azure Firewall](../../../firewall/features.md) - A stateful, managed, Firewall as a Service (FaaS) that grants/ denies server access based on originating IP address, to protect network resources.
+- [Azure Distributed Denial of Service (DDoS) protection](../../../ddos-protection/ddos-protection-overview.md) - DDoS attacks overwhelm and exhaust network resources, making apps slow or unresponsive. Azure DDoS protection sanitizes unwanted network traffic before it impacts service availability.
+- [Network Security Groups (NSGs)](../../../virtual-network/network-security-groups-overview.md) - Filters network traffic to, and from, Azure resources on Azure Virtual Networks.
+- [Application Security Groups](../../../virtual-network/application-security-groups.md) - Groups servers with similar port filtering requirements and similar functions, such as web servers and database servers.
-To configure this after provisioning, you have two options:
--- For Resource Manager VMs, you can select **Security** from the [SQL virtual machines resource](manage-sql-vm-portal.md#access-the-resource). This provides an option to change the port.-
- ![TCP port change in portal](./media/security-considerations-best-practices/sql-vm-change-tcp-port.png)
+## Encryption
-- For Classic VMs or for SQL Server VMs that were not provisioned with the portal, you can manually configure the port by connecting remotely to the VM. For the configuration steps, see [Configure a Server to Listen on a Specific TCP Port](/sql/database-engine/configure-windows/configure-a-server-to-listen-on-a-specific-tcp-port). If you use this manual technique, you also need to add a Windows Firewall rule to allow incoming traffic on that TCP port.
+Managed disks offer server-side encryption, and Azure Disk Encryption. [Server-side encryption](../../../virtual-machines/disk-encryption.md) provides encryption-at-rest and safeguards your data to meet your organizational security and compliance commitments. [Azure Disk Encryption](../../../security/fundamentals/azure-disk-encryption-vms-vmss.md) uses either BitLocker or DM-Crypt technology, and integrates with Azure Key Vault to encrypt both the OS and data disks.
-> [!IMPORTANT]
-> Specifying a non-default port is a good idea if your SQL Server port is open to public internet connections.
+Consider the following:
-When SQL Server is listening on a non-default port, you must specify the port when you connect. For example, consider a scenario where the server IP address is 13.55.255.255 and SQL Server is listening on port 1401. To connect to SQL Server, you would specify `13.55.255.255,1401` in the connection string.
+- [Azure Disk Encryption](../../../virtual-machines/windows/disk-encryption-overview.md) - Encrypts the disks of both Windows and Linux virtual machines.
+ - When your compliance and security requirements require you to encrypt the data end-to-end using your encryption keys, including encryption of the ephemeral (locally attached temporary) disk, use [Azure Disk Encryption](../../../virtual-machines/windows/disk-encryption-windows.md).
+ - Azure Disk Encryption (ADE) uses the industry-standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide OS and data disk encryption.
+- Managed Disk Encryption
+ - [Managed Disks are encrypted](../../../virtual-machines/disk-encryption.md) at rest by default using Azure Storage Service Encryption where the encryption keys are Microsoft managed keys stored in Azure.
+ - Data in Azure managed disks is encrypted transparently using 256-bit AES encryption, one of the strongest block ciphers available, and is FIPS 140-2 compliant.
+- For a comparison of the managed disk encryption options, review the [managed disk encryption comparison chart](../../../virtual-machines/disk-encryption-overview.md#comparison).
## Manage accounts
You don't want attackers to easily guess account names or passwords. Use the fol
- If you must use the **SA** login, enable the login after provisioning and assign a new strong password.
+## Auditing and reporting
+
+[Auditing with Log Analytics](../../../azure-monitor/agents/data-sources-windows-events.md#configuring-windows-event-logs) documents events and writes them to an audit log in a secure Azure Blob storage account. You can use Log Analytics to decipher the details of the audit logs. Auditing gives you the ability to save data to a separate storage account and create an audit trail of all the events you select. You can also use Power BI against the audit log for quick analytics and insights about your data, and to provide a view for regulatory compliance. To learn more about auditing at the VM and Azure levels, see [Azure security logging and auditing](../../../security/fundamentals/log-audit.md).
+
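As a rough illustration of working with exported audit logs, the sketch below counts failed logins in a set of JSON records. The record shape here is hypothetical; real Azure diagnostic logs use a different, richer schema:

```python
import json

# Hypothetical audit records, one JSON object per line. The field
# names ("operation", "status") are illustrative only.
raw = """
{"time": "2022-03-03T08:00:00Z", "operation": "Login", "status": "Succeeded"}
{"time": "2022-03-03T08:05:00Z", "operation": "Login", "status": "Failed"}
{"time": "2022-03-03T08:06:00Z", "operation": "Login", "status": "Failed"}
"""

def failed_logins(lines):
    """Count records whose status is Failed."""
    records = [json.loads(line) for line in lines.strip().splitlines()]
    return sum(1 for r in records if r["status"] == "Failed")

count = failed_logins(raw)
```

In practice, Log Analytics (KQL) or Power BI would do this aggregation over the blob-stored logs; the snippet only shows the idea of filtering the trail for suspicious events.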
+## Virtual Machine level access
+
+Close management ports on your machine - Open remote management ports expose your VM to a high level of risk from internet-based attacks. These attacks attempt to brute-force credentials to gain admin access to the machine.
+- Turn on [Just-in-time (JIT) access](../../../security-center/security-center-just-in-time.md?tabs=jit-config-asc%2Cjit-request-asc) for Azure virtual machines.
+- Leverage [Azure Bastion](../../../bastion/bastion-overview.md) over Remote Desktop Protocol (RDP).
+## Virtual Machine extensions
+
+Azure Virtual Machine extensions are trusted Microsoft or third-party extensions that can help address specific needs and risks, such as antivirus, antimalware, and threat protection.
+- [Guest Configuration extension](../../../virtual-machines/extensions/guest-configuration.md)
+ - To ensure secure configurations of in-guest settings of your machine, install the Guest Configuration extension.
+ - In-guest settings include the configuration of the operating system, application configuration or presence, and environment settings.
+ - Once installed, in-guest policies will be available, such as 'Windows Exploit Guard should be enabled'.
+- [Network traffic data collection agent](../../../virtual-machines/extensions/network-watcher-windows.md)
+ - Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines.
+ - This agent enables advanced network protection features such as traffic visualization on the network map, network hardening recommendations, and specific network threats.
+- [Evaluate extensions](../../../virtual-machines/extensions/overview.md) from Microsoft and third parties to address anti-malware, desired state, threat detection, prevention, and remediation at the operating system, machine, and network levels.
## Next steps
-If you are also interested in best practices around performance, see [Performance Best Practices for SQL Server on Azure Virtual Machines](./performance-guidelines-best-practices-checklist.md).
+Review the security best practices for [SQL Server](/sql/relational-databases/security/) and [Azure VMs](../../../virtual-machines/security-recommendations.md) and then review this article for the best practices that apply to SQL Server on Azure VMs specifically.
For other topics related to running SQL Server in Azure VMs, see [SQL Server on Azure Virtual Machines overview](sql-server-on-azure-vm-iaas-what-is-overview.md). If you have questions about SQL Server virtual machines, see the [Frequently Asked Questions](frequently-asked-questions-faq.yml).
-To learn more, see the other articles in this series:
+To learn more, see the other articles in this best practices series:
- [Quick checklist](performance-guidelines-best-practices-checklist.md)
- [VM size](performance-guidelines-best-practices-vm-size.md)
- [Storage](performance-guidelines-best-practices-storage.md)
-- [Security](security-considerations-best-practices.md)
- [HADR settings](hadr-cluster-best-practices.md)
- [Collect baseline](performance-guidelines-best-practices-collect-baseline.md)
azure-sql Sql Assessment For Sql Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-assessment-for-sql-vm.md
The SQL best practices assessment feature of the Azure portal identifies possibl
To learn more, watch this video on [SQL best practices assessment](/shows/Data-Exposed/?WT.mc_id=dataexposed-c9-niner):
-<iframe src="https://aka.ms/docs/player?id=13b2bf63-485c-4ec2-ab14-a1217734ad9f" width="640" height="370" style="border: 0; max-width: 100%; min-width: 100%;"></iframe>
+<iframe src="https://aka.ms/docs/player?id=13b2bf63-485c-4ec2-ab14-a1217734ad9f" width="640" height="370"></iframe>
azure-sql Sql Server Iaas Agent Extension Automate Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/virtual-machines/windows/sql-server-iaas-agent-extension-automate-management.md
The following table details these benefits:
| **Flexible licensing** | Save on cost by [seamlessly transitioning](licensing-model-azure-hybrid-benefit-ahb-change.md) from the bring-your-own-license (also known as the Azure Hybrid Benefit) to the pay-as-you-go licensing model and back again. <br/> Management mode: Lightweight & full|
| **Flexible version / edition** | If you decide to change the [version](change-sql-server-version.md) or [edition](change-sql-server-edition.md) of SQL Server, you can update the metadata within the Azure portal without having to redeploy the entire SQL Server VM. <br/> Management mode: Lightweight & full|
| **Defender for Cloud portal integration** | If you've enabled [Microsoft Defender for SQL](../../../security-center/defender-for-sql-usage.md), then you can view Defender for Cloud recommendations directly in the [SQL virtual machines](manage-sql-vm-portal.md) resource of the Azure portal. See [Security best practices](security-considerations-best-practices.md) to learn more. <br/> Management mode: Lightweight & full|
-| **SQL Assessment (Preview)** | Enables you to assess the health of your SQL Server VMs using configuration best practices. For more information, see [SQL Assessment](sql-assessment-for-sql-vm.md). <br/> Management mode: Full|
+| **SQL best practices assessment** | Enables you to assess the health of your SQL Server VMs using configuration best practices. For more information, see [SQL best practices assessment](sql-assessment-for-sql-vm.md). <br/> Management mode: Full|
## Management modes
azure-vmware Configure Vmware Hcx https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/configure-vmware-hcx.md
For an end-to-end overview of this procedure, view the [Azure VMware Solution: C
## Create a service mesh

>[!IMPORTANT]
->Make sure ports UDP 500/4500 are open between your on-premises VMware HCX Connector 'uplink' network profile addresses and the Azure VMware Solution HCX Cloud 'uplink' network profile addresses.
+>Make sure port UDP 500 is open between your on-premises VMware HCX Connector 'uplink' network profile addresses and the Azure VMware Solution HCX Cloud 'uplink' network profile addresses. (UDP 4500 was previously required in legacy versions of HCX; see https://ports.vmware.com for the latest information.)
1. Under **Infrastructure**, select **Interconnect** > **Service Mesh** > **Create Service Mesh**.
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy (in preview) description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 02/11/2022 Last updated : 02/18/2022
Follow these steps:
- **Policy sub-type**: Select **Enhanced** type. By default, the policy type is set to **Standard**.
- :::image type="content" source="./media/backup-azure-vms-enhanced-policy/select-enhanced-backup-policy-sub-type.png" alt-text="Screenshot showing to select backup policies sub-type as enhanced.":::
+ :::image type="content" source="./media/backup-azure-vms-enhanced-policy/select-enhanced-backup-policy-sub-type.png" alt-text="Screenshot showing to select backup policies subtype as enhanced.":::
- **Backup schedule**: You can select frequency as **Hourly**/Daily/Weekly.
- By default, enhanced backup schedule is set to **Hourly**, with **8 AM** as start time, **Every 4 hours** as schedule, and **24 Hours** as duration. You can choose to modify the settings as needed.
+ With the backup schedule set to **Hourly**, the default start time is **8 AM**, the schedule is **Every 4 hours**, and the duration is **24 Hours**. Hourly backup has a minimum RPO of 4 hours and a maximum of 24 hours. You can set the backup interval to every 4, 6, 8, 12, or 24 hours.
Note that Hourly backup frequency is in preview. To enroll your subscription for this feature, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com).
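The hourly schedule described above can be sketched numerically. This illustrative helper (not part of any Azure SDK) lists the backup points produced by a start time, interval, and duration, using the default enhanced-policy values:

```python
from datetime import datetime, timedelta

def backup_times(start, every_hours, duration_hours):
    """List scheduled backup times within one backup window.

    start: first backup of the window; every_hours: interval
    (4, 6, 8, 12, or 24 per the policy); duration_hours: window length.
    """
    times = []
    t = start
    end = start + timedelta(hours=duration_hours)
    while t < end:
        times.append(t)
        t += timedelta(hours=every_hours)
    return times

# Default enhanced policy: start 8 AM, every 4 hours, for 24 hours.
schedule = backup_times(datetime(2022, 3, 3, 8, 0), 4, 24)
```

With these defaults the window yields six recovery points a day, which is consistent with the 4-hour minimum RPO the policy advertises.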
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 02/11/2022 Last updated : 02/18/2022
Monthly/yearly backup| Not supported when backing up with Azure VM extension. On
Automatic clock adjustment | Not supported.<br/><br/> Azure Backup doesn't automatically adjust for daylight saving time changes when backing up a VM.<br/><br/> Modify the policy manually as needed.
[Security features for hybrid backup](./backup-azure-security-feature.md) |Disabling security features isn't supported.
Back up the VM whose machine time is changed | Not supported.<br/><br/> If the machine time is changed to a future date-time after enabling backup for that VM, successful backup isn't guaranteed even if the time change is reverted.
-Multiple Backups Per Day | Supported, using _Enhanced policy_ (in preview). To enroll your subscription for this feature, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com). <br><br> Learn about how to [back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md).
+Multiple Backups Per Day | Supported, using _Enhanced policy_ (in preview). To enroll your subscription for this feature, write to us at [askazurebackupteam@microsoft.com](mailto:askazurebackupteam@microsoft.com). <br><br> For hourly backup, the minimum RPO is 4 hours and the maximum is 24 hours. You can set the backup schedule to every 4, 6, 8, 12, or 24 hours. Learn how to [back up an Azure VM using Enhanced policy](backup-azure-vms-enhanced-policy.md).
## Operating system support (Windows)
Backup of Azure VMs with locks | Unsupported for unmanaged VMs. <br><br> Support
Windows Storage Spaces configuration of standalone Azure VMs | Supported
[Azure Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for flexible orchestration model to back up and restore a single Azure VM.
Restore with Managed identities | Yes, supported for managed Azure VMs, and not supported for classic and unmanaged Azure VMs. <br><br> Cross Region Restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).
-<a name="tvm-backup">Trusted Launch VM</a> | Backup supported (in preview) <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br> <ul><li> Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-vm). </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Currently, you can restore as [Create VM](./backup-azure-arm-restore-vms.md#create-a-vm), or [Restore disk](./backup-azure-arm-restore-vms.md#restore-disks) only. </li><li> Backup is supported in all regions where Trusted Launch VM is available. </li></ul>
+<a name="tvm-backup">Trusted Launch VM</a> | Backup supported (in preview) <br><br> Backup of Trusted Launch VM is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through [Recovery Services vault](./backup-azure-arm-vms-prepare.md), [VM Manage blade](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and [Create VM blade](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br> <ul><li> Backup is supported in all regions where Trusted Launch VM is available. </li><li> Configurations of Backup, Alerts, and Monitoring for Trusted Launch VM are currently not supported through Backup center. </li><li> Migration of an existing [Generation 2](../virtual-machines/generation-2.md) VM (protected with Azure Backup) to Trusted Launch VM is currently not supported. Learn about how to [create a Trusted Launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-vm). </li></ul>
## VM storage support
backup Manage Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-file-share-rest-api.md
Title: Manage Azure File share backup with Rest API
+ Title: Manage Azure File share backup with REST API
description: Learn how to use REST API to manage and monitor Azure file shares that are backed up by Azure Backup. Last updated 02/17/2020
backup Quick Backup Vm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-template.md
New-AzResourceGroup -Name $resourceGroupName -Location $location
New-AzResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateUri $templateUri -projectName $projectName -adminUsername $adminUsername -adminPassword $adminPassword -dnsLabelPrefix $dnsPrefix ```
-Azure PowerShell is used to deploy the ARM template in this quickstart. The [Azure portal](../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../azure-resource-manager/templates/deploy-cli.md), and [Rest API](../azure-resource-manager/templates/deploy-rest.md) can also be used to deploy templates.
+Azure PowerShell is used to deploy the ARM template in this quickstart. The [Azure portal](../azure-resource-manager/templates/deploy-portal.md), [Azure CLI](../azure-resource-manager/templates/deploy-cli.md), and [REST API](../azure-resource-manager/templates/deploy-rest.md) can also be used to deploy templates.
## Validate the deployment
backup Restore Azure File Share Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-azure-file-share-rest-api.md
The response should be handled in the same way as explained above for [full shar
## Next steps
-* Learn how to [manage Azure file shares backup using Rest API](manage-azure-file-share-rest-api.md).
+* Learn how to [manage Azure file shares backup using REST API](manage-azure-file-share-rest-api.md).
bastion Bastion Connect Vm Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-scale-set.md
Make sure that you have set up an Azure Bastion host for the virtual network in
## <a name="rdp"></a>Connect using RDP
-1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine scale set that you want to connect to.
+1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine scale set that you want to connect to.
![navigate](./media/bastion-connect-vm-scale-set/1.png)
-2. Navigate to the virtual machine scale set instance that you want to connect to, then select **Connect**. When using an RDP connection, the virtual machine scale set should be a Windows virtual machine scale set.
+2. Go to the virtual machine scale set instance that you want to connect to, then select **Connect**. When using an RDP connection, the virtual machine scale set should be a Windows virtual machine scale set.
![virtual machine scale set](./media/bastion-connect-vm-scale-set/2.png) 3. After you select **Connect**, a side bar appears that has three tabs ΓÇô RDP, SSH, and Bastion. Select the **Bastion** tab from the side bar. If you didn't provision Bastion for the virtual network, you can select the link to configure Bastion. For configuration instructions, see [Configure Bastion](./tutorial-create-host-portal.md).
bastion Bastion Connect Vm Ssh Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-linux.md
In order to connect to the Linux VM via SSH, you must have the following ports o
## <a name="username"></a>Connect: Using username and password
-1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
:::image type="content" source="./media/bastion-connect-vm-ssh-linux/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected" lightbox="./media/bastion-connect-vm-ssh-linux/connect.png":::
In order to connect to the Linux VM via SSH, you must have the following ports o
## <a name="privatekey"></a>Connect: Manually enter a private key
-1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
:::image type="content" source="./media/bastion-connect-vm-ssh-linux/connect.png" alt-text="Screenshot of the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-linux/connect.png"::: 1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
In order to connect to the Linux VM via SSH, you must have the following ports o
## <a name="ssh"></a>Connect: Using a private key file
-1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
:::image type="content" source="./media/bastion-connect-vm-ssh-linux/connect.png" alt-text="Screenshot depicts the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-linux/connect.png"::: 1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
In order to connect to the Linux VM via SSH, you must have the following ports o
## <a name="akv"></a>Connect: Using a private key stored in Azure Key Vault
-1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
:::image type="content" source="./media/bastion-connect-vm-ssh-linux/connect.png" alt-text="Screenshot showing the overview for a virtual machine in Azure portal with Connect selected" lightbox="./media/bastion-connect-vm-ssh-linux/connect.png"::: 1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
bastion Bastion Connect Vm Ssh Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-windows.md
Currently, Azure Bastion only supports connecting to Windows VMs via SSH using *
## <a name="username"></a>Connect: Using username and password
-1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
:::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot of overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-windows/connect.png":::
Currently, Azure Bastion only supports connecting to Windows VMs via SSH using *
## <a name="privatekey"></a>Connect: Manually enter a private key
-1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
:::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-windows/connect.png"::: 1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
Currently, Azure Bastion only supports connecting to Windows VMs via SSH using *
## <a name="ssh"></a>Connect: Using a private key file
-1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
:::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot depicts the overview for a virtual machine in Azure portal with Connect selected" lightbox="./media/bastion-connect-vm-ssh-windows/connect.png"::: 1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
Currently, Azure Bastion only supports connecting to Windows VMs via SSH using *
## <a name="akv"></a>Connect: Using a private key stored in Azure Key Vault
-1. Open the [Azure portal](https://portal.azure.com). Navigate to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
:::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot is the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-windows/connect.png"::: 1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
bastion Configure Host Scaling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/configure-host-scaling.md
This article helps you add additional scale units (instances) to Azure Bastion t
## Configuration steps

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the Azure portal, navigate to your Bastion host.
+1. In the Azure portal, go to your Bastion host.
1. Host scaling instance count requires Standard tier. On the **Configuration** page, for **Tier**, verify the tier is **Standard**. If the tier is Basic, select **Standard** from the dropdown. :::image type="content" source="./media/configure-host-scaling/select-sku.png" alt-text="Screenshot of Select Tier." lightbox="./media/configure-host-scaling/select-sku.png":::
bastion Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/diagnostic-logs.md
As users connect to workloads using Azure Bastion, Bastion can log diagnostics o
## <a name="enable"></a>Enable the resource log
-1. In the [Azure portal](https://portal.azure.com), navigate to your Azure Bastion resource and select **Diagnostics settings** from the Azure Bastion page.
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Bastion resource and select **Diagnostics settings** from the Azure Bastion page.
![Screenshot that shows the "Diagnostics settings" page.](./media/diagnostic-logs/1diagnostics-settings.png) 2. Select **Diagnostics settings**, then select **+Add diagnostic setting** to add a destination for the logs.
To access your diagnostics logs, you can directly use the storage account that y
1. Navigate to your storage account resource, then to **Containers**. You see the **insights-logs-bastionauditlogs** blob created in your storage account blob container. ![diagnostics settings](./media/diagnostic-logs/1-navigate-to-logs.png)
-2. As you navigate to inside the container, you see various folders in your blob. These folders indicate the resource hierarchy for your Azure Bastion resource.
+2. As you go inside the container, you see various folders in your blob. These folders indicate the resource hierarchy for your Azure Bastion resource.
![add diagnostic setting](./media/diagnostic-logs/2-resource-h.png) 3. Navigate to the full hierarchy of your Azure Bastion resource whose diagnostics logs you wish to access/view. The 'y=', 'm=', 'd=', 'h=' and 'm=' indicate the year, month, day, hour, and minute respectively for the resource logs.
bastion Howto Metrics Monitor Alert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/howto-metrics-monitor-alert.md
The recommended values for this metric's configuration are:
## <a name="metrics"></a>How to view metrics
-1. To view metrics, navigate to your bastion host.
+1. To view metrics, go to your bastion host.
1. From the **Monitoring** list, select **Metrics**. 1. Select the parameters. If no metrics are set, click **Add metric**, and then select the parameters.
bastion Quickstart Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/quickstart-host-portal.md
When you deploy from VM settings, Bastion is automatically configured with defau
In this quickstart, you deploy Bastion from your virtual machine settings in the Azure portal. You don't connect and sign in to your virtual machine or deploy Bastion from your VM directly.

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the portal, navigate to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment.
+1. In the portal, go to the VM to which you want to connect. The values from the virtual network in which this VM resides will be used to create the Bastion deployment.
1. Select **Bastion** in the left menu. You can view some of the values that will be used when creating the bastion host for your virtual network. Select **Deploy Bastion**. :::image type="content" source="./media/quickstart-host-portal/deploy-bastion.png" alt-text="Screenshot of Deploy Bastion." lightbox="./media/quickstart-host-portal/deploy-bastion.png":::
bastion Session Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/session-monitoring.md
Once the Bastion service is provisioned and deployed in your virtual network, yo
## <a name="monitor"></a>Monitor remote sessions
-1. In the [Azure portal](https://portal.azure.com), navigate to your Azure Bastion resource and select **Sessions** from the Azure Bastion page.
+1. In the [Azure portal](https://portal.azure.com), go to your Azure Bastion resource and select **Sessions** from the Azure Bastion page.
![Screenshot shows the Azure portal menu Settings with Sessions selected.](./media/session-monitoring/sessions.png)

2. On the **Sessions** page, you can see the ongoing remote sessions on the right side.
bastion Upgrade Sku https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/upgrade-sku.md
This article helps you upgrade from the Basic Tier (SKU) to Standard. Once you u
## Configuration steps

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. In the Azure portal, navigate to your Bastion host.
+1. In the Azure portal, go to your Bastion host.
1. On the **Configuration** page, for **Tier**, select **Standard** from the dropdown.

   :::image type="content" source="./media/upgrade-sku/select-sku.png" alt-text="Screenshot of tier select dropdown with Standard selected." lightbox="./media/upgrade-sku/select-sku-expand.png":::
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
description: Understand the available actions you can use with Chaos Studio incl
Previously updated : 02/09/2022 Last updated : 03/03/2022
Known issues on Linux:
* The reboot fault causes a forced reboot to better simulate an outage event, which means there is the potential for data loss to occur. * The reboot fault is a **discrete** fault type. Unlike continuous faults, it is a one-time action and therefore has no duration.++
+## Cloud Services (Classic) shutdown
+
+| Property | Value |
+|-|-|
+| Capability Name | Shutdown-1.0 |
+| Target type | Microsoft-DomainName |
+| Description | Stops a deployment for the duration of the fault and restarts the deployment at the end of the fault duration or if the experiment is canceled. |
+| Prerequisites | None. |
+| Urn | urn:csci:microsoft:domainName:shutdown/1.0 |
+| Fault type | Continuous |
+| Parameters | None. |
+
+### Sample JSON
+
+```json
+{
+ "name": "branchOne",
+ "actions": [
+ {
+ "type": "continuous",
+ "name": "urn:csci:microsoft:domainName:shutdown/1.0",
+ "parameters": [],
+ "duration": "PT10M",
+ "selectorid": "myResources"
+ }
+ ]
+}
+```
+
+## Key Vault Deny Access
+| Property | Value |
+|-|-|
+| Capability Name | DenyAccess-1.0 |
+| Target type | Microsoft-KeyVault |
+| Description | Blocks all network access to a Key Vault by temporarily modifying the Key Vault network rules, preventing an application dependent on the Key Vault from accessing secrets, keys, and/or certificates. If the Key Vault allows access to all networks, this is changed to only allow access from selected networks with no virtual networks in the allowed list at the start of the fault and returned to allowing access to all networks at the end of the fault duration. If the Key Vault is set to only allow access from selected networks, any virtual networks in the allowed list are removed at the start of the fault and restored at the end of the fault duration. |
+| Prerequisites | The target Key Vault cannot have any firewall rules and must not be set to allow Azure services to bypass the firewall. If the target Key Vault is set to only allow access from selected networks, there must be at least one virtual network rule. The Key Vault cannot be in recover mode. |
+| Urn | urn:csci:microsoft:keyVault:denyAccess/1.0 |
+| Fault type | Continuous |
+| Parameters (key, value) | None. |
++
+### Sample JSON
+
+```json
+{
+ "name": "branchOne",
+ "actions": [
+ {
+ "type": "continuous",
+ "name": "urn:csci:microsoft:keyVault:denyAccess/1.0",
+ "parameters": [],
+ "duration": "PT10M",
+ "selectorid": "myResources"
+ }
+ ]
+}
+```
chaos-studio Chaos Studio Fault Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-providers.md
The following are the supported resource types for faults, the target types, and
| Resource Type | Target name | Suggested role assignment | | - | - | - |
+| Microsoft.Cache/Redis (service-direct) | Microsoft-AzureCacheForRedis | Redis Cache Contributor |
+| Microsoft.ClassicCompute/domainNames (service-direct) | Microsoft-DomainName | Classic Virtual Machine Contributor |
| Microsoft.Compute/virtualMachines (agent-based) | Microsoft-Agent | Reader | | Microsoft.Compute/virtualMachineScaleSets (agent-based) | Microsoft-Agent | Reader | | Microsoft.Compute/virtualMachines (service-direct) | Microsoft-VirtualMachine | Virtual Machine Contributor | | Microsoft.Compute/virtualMachineScaleSets (service-direct) | Microsoft-VirtualMachineScaleSet | Virtual Machine Contributor |
-| Microsoft.DocumentDb/databaseAccounts (CosmosDB, service-direct) | Microsoft-CosmosDB | Cosmos DB Operator |
| Microsoft.ContainerService/managedClusters (service-direct) | Microsoft-AzureKubernetesServiceChaosMesh | Azure Kubernetes Service Cluster User Role |
+| Microsoft.DocumentDb/databaseAccounts (CosmosDB, service-direct) | Microsoft-CosmosDB | Cosmos DB Operator |
+| Microsoft.KeyVault/vaults (service-direct) | Microsoft-KeyVault | Key Vault Contributor |
| Microsoft.Network/networkSecurityGroups (service-direct) | Microsoft-NetworkSecurityGroup | Network Contributor |
-| Microsoft.Cache/Redis (service-direct) | Microsoft-AzureCacheForRedis | Redis Cache Contributor |
cloud-services-extended-support In Place Migration Technical Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/in-place-migration-technical-details.md
This article discusses the technical details regarding the migration tool as per
### Service Configuration and Service Definition files - The .cscfg and .csdef files need to be updated for Cloud Services (extended support) with minor changes. - The names of resources like virtual network and VM SKU are different. See [Translation of resources and naming convention post migration](#translation-of-resources-and-naming-convention-post-migration)-- Customers can retrieve their new deployments through [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) and [Rest API](/rest/api/compute/cloudservices/get).
+- Customers can retrieve their new deployments through [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) and [REST API](/rest/api/compute/cloudservices/get).
### Cloud Service and deployments - Each Cloud Services (extended support) deployment is an independent Cloud Service. Deployments are no longer grouped into a cloud service using slots.
As part of migration, the resource names are changed, and few Cloud Services fea
### Portal refreshed after Prepare. Experience restarted and Commit or Abort not visible anymore. - The portal stores the migration information locally, so after a refresh it will start from the validate phase even if the Cloud Service is in the prepare phase. - You can use the portal to go through the validate and prepare steps again to expose the Abort and Commit buttons. It will not cause any failures.-- Customers can use PowerShell or Rest API to abort or commit.
+- Customers can use PowerShell or REST API to abort or commit.
### How much time can the operations take?<br> Validate is designed to be quick. Prepare is the longest running operation, and its duration depends on the total number of role instances being migrated. Abort and commit also take time, but less than prepare. All operations time out after 24 hours.
cloud-services-extended-support Post Migration Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services-extended-support/post-migration-changes.md
Minor changes are made to customer's .csdef and .cscfg file to make the deploy
- Classic sizes like Small, Large, ExtraLarge are replaced by their new size names, Standard_A*. The size names need to be changed to their new names in .csdef file. For more information, see [Cloud Services (extended support) deployment prerequisites](deploy-prerequisite.md#required-service-definition-file-csdef-updates) - Use the Get API to get the latest copy of the deployment files.
- - Get the template using [Portal](../azure-resource-manager/templates/export-template-portal.md), [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), [CLI](../azure-resource-manager/management/manage-resource-groups-cli.md#export-resource-groups-to-templates), and [Rest API](/rest/api/resources/resourcegroups/exporttemplate)
- - Get the .csdef file using [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) or [Rest API](/rest/api/compute/cloudservices/rest-get-package).
- - Get the .cscfg file using [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) or [Rest API](/rest/api/compute/cloudservices/rest-get-package).
+ - Get the template using [Portal](../azure-resource-manager/templates/export-template-portal.md), [PowerShell](../azure-resource-manager/management/manage-resource-groups-powershell.md#export-resource-groups-to-templates), [CLI](../azure-resource-manager/management/manage-resource-groups-cli.md#export-resource-groups-to-templates), and [REST API](/rest/api/resources/resourcegroups/exporttemplate)
+ - Get the .csdef file using [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) or [REST API](/rest/api/compute/cloudservices/rest-get-package).
+ - Get the .cscfg file using [PowerShell](/powershell/module/az.cloudservice/?preserve-view=true&view=azps-5.4.0#cloudservice) or [REST API](/rest/api/compute/cloudservices/rest-get-package).
Customers need to update their tooling and automation to start using the new API
## Changes to Certificate Management Post Migration
-As a standard practice to manage your certificates, all the valid .pfx certificate files should be added to certificate store in Key Vault and update would work perfectly fine via any client - Portal, PowerShell or Rest API.
+As a standard practice for managing your certificates, add all valid .pfx certificate files to the certificate store in Key Vault; updates will then work via any client - Portal, PowerShell, or REST API.
Currently, the Azure portal validates that all the required certificates are uploaded to the certificate store in Key Vault, and warns if a certificate is not found. However, if you plan to use certificates as secrets, these certificates cannot be validated for their thumbprint, and any update operation that involves adding secrets will fail via the portal. Customers are recommended to use PowerShell or the REST API for updates involving secrets.
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 2/15/2022 Last updated : 3/2/2022
The following tables show the Microsoft Security Response Center (MSRC) updates
## February 2022 Guest OS
->[!NOTE]
-
->The February Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the February Guest OS. This list is subject to change.
->
| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced | | | | | | |
-| Rel 22-02 | [5010351] | Latest Cumulative Update(LCU) | 6.41 | Feb 8, 2022 |
-| Rel 22-02 | [5006671] | IE Cumulative Updates | 2.120, 3.107, 4.100 | Oct 12, 2021 |
-| Rel 22-02 | [5010354] | Latest Cumulative Update(LCU) | 7.9 | Feb 8, 2022 |
-| Rel 22-02 | [5010359] | Latest Cumulative Update(LCU) | 5.65 | Feb 8, 2022 |
-| Rel 22-02 | [5008867] | .NET Framework 3.5 Security and Quality Rollup | 2.120 | Jan 11, 2022 |
-| Rel 22-02 | [5008860] | .NET Framework 4.5.2 Security and Quality Rollup | 2.120 | Jan 11, 2022 |
-| Rel 22-02 | [5008868] | .NET Framework 3.5 Security and Quality Rollup | 4.100 | Jan 11, 2022 |
-| Rel 22-02 | [5008870] | .NET Framework 4.5.2 Security and Quality Rollup | 4.100 | Jan 11, 2022 |
-| Rel 22-02 | [5008865] | .NET Framework 3.5 Security and Quality Rollup | 3.107 | Jan 11, 2022 |
-| Rel 22-02 | [5008869] | . NET Framework 4.5.2 Security and Quality Rollup | 3.107 | Jan 11, 2022 |
-| Rel 22-02 | [5008873] | . NET Framework 3.5 and 4.7.2 Cumulative Update | 6.41 | Jan 11, 2022 |
-| Rel 22-02 | [5008882] | .NET Framework 4.8 Security and Quality Rollup | 7.9 | Jan 11, 2022 |
-| Rel 22-02 | [5010404] | Monthly Rollup | 2.120 | Feb 8, 2022 |
-| Rel 22-02 | [5010392] | Monthly Rollup | 3.107 | Feb 8, 2022 |
-| Rel 22-02 | [5010419] | Monthly Rollup | 4.100 | Feb 8, 2022 |
-| Rel 22-02 | [5001401] | Servicing Stack update | 3.107 | Apr 13, 2021 |
-| Rel 22-02 | [5001403] | Servicing Stack update | 4.100 | Apr 13, 2021 |
-| Rel 22-02 | [4578013] | Standalone Security Update | 4.100 | Aug 19, 2020 |
-| Rel 22-02 | [5005698] | Servicing Stack update | 5.65 | Sep 14, 2021 |
-| Rel 22-02 | [5010451] | Servicing Stack update | 2.120 | Feb 8, 2022 |
-| Rel 22-02 | [4494175] | Microcode | 5.65 | Sep 1, 2020 |
-| Rel 22-02 | [4494174] | Microcode | 6.41 | Sep 1, 2020 |
+| Rel 22-02 | [5010351] | Latest Cumulative Update(LCU) | [6.41] | Feb 8, 2022 |
+| Rel 22-02 | [5006671] | IE Cumulative Updates | [2.120], [3.107], [4.100] | Oct 12, 2021 |
+| Rel 22-02 | [5010354] | Latest Cumulative Update(LCU) | [7.9] | Feb 8, 2022 |
+| Rel 22-02 | [5010359] | Latest Cumulative Update(LCU) | [5.65] | Feb 8, 2022 |
+| Rel 22-02 | [5008867] | .NET Framework 3.5 Security and Quality Rollup | [2.120] | Jan 11, 2022 |
+| Rel 22-02 | [5008860] | .NET Framework 4.5.2 Security and Quality Rollup | [2.120] | Jan 11, 2022 |
+| Rel 22-02 | [5008868] | .NET Framework 3.5 Security and Quality Rollup | [4.100] | Jan 11, 2022 |
+| Rel 22-02 | [5008870] | .NET Framework 4.5.2 Security and Quality Rollup | [4.100] | Jan 11, 2022 |
+| Rel 22-02 | [5008865] | .NET Framework 3.5 Security and Quality Rollup | [3.107] | Jan 11, 2022 |
+| Rel 22-02 | [5008869] | .NET Framework 4.5.2 Security and Quality Rollup | [3.107] | Jan 11, 2022 |
+| Rel 22-02 | [5008873] | .NET Framework 3.5 and 4.7.2 Cumulative Update | [6.41] | Jan 11, 2022 |
+| Rel 22-02 | [5008882] | .NET Framework 4.8 Security and Quality Rollup | [7.9] | Jan 11, 2022 |
+| Rel 22-02 | [5010404] | Monthly Rollup | [2.120] | Feb 8, 2022 |
+| Rel 22-02 | [5010392] | Monthly Rollup | [3.107] | Feb 8, 2022 |
+| Rel 22-02 | [5010419] | Monthly Rollup | [4.100] | Feb 8, 2022 |
+| Rel 22-02 | [5001401] | Servicing Stack update | [3.107] | Apr 13, 2021 |
+| Rel 22-02 | [5001403] | Servicing Stack update | [4.100] | Apr 13, 2021 |
+| Rel 22-02 | [4578013] | Standalone Security Update | [4.100] | Aug 19, 2020 |
+| Rel 22-02 | [5005698] | Servicing Stack update | [5.65] | Sep 14, 2021 |
+| Rel 22-02 | [5010451] | Servicing Stack update | [2.120] | Feb 8, 2022 |
+| Rel 22-02 | [4494175] | Microcode | [5.65] | Sep 1, 2020 |
+| Rel 22-02 | [4494174] | Microcode | [6.41] | Sep 1, 2020 |
[5010351]: https://support.microsoft.com/kb/5010351 [5006671]: https://support.microsoft.com/kb/5006671
The following tables show the Microsoft Security Response Center (MSRC) updates
[5010451]: https://support.microsoft.com/kb/5010451 [4494175]: https://support.microsoft.com/kb/4494175 [4494174]: https://support.microsoft.com/kb/4494174
+[2.120]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.107]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.100]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.65]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.41]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.9]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## January 2022 Guest OS | Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 2/11/2022 Last updated : 3/2/2022 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates +
+###### **March 2, 2022**
+The February Guest OS has released.
+ ###### **February 11, 2022** The January Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-7.9_202202-01 | March 2, 2022 | Post 7.11 |
| WA-GUEST-OS-7.8_202201-02 | February 11, 2022 | Post 7.10 |
-| WA-GUEST-OS-7.6_202112-01 | January 10, 2022 | Post 7.9 |
+|~~WA-GUEST-OS-7.6_202112-01~~| January 10, 2022 | March 2, 2022 |
|~~WA-GUEST-OS-7.5_202111-01~~| November 19, 2021 | February 11, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-6.41_202202-01 | March 2, 2022 | Post 6.43 |
| WA-GUEST-OS-6.40_202201-02 | February 11, 2022 | Post 6.42 |
-| WA-GUEST-OS-6.38_202112-01 | January 10, 2022 | Post 6.41 |
+|~~WA-GUEST-OS-6.38_202112-01~~| January 10, 2022 | March 2, 2022 |
|~~WA-GUEST-OS-6.37_202111-01~~| November 19, 2021 | February 11, 2022 | |~~WA-GUEST-OS-6.36_202110-01~~| November 1, 2021 | January 10, 2022 | |~~WA-GUEST-OS-6.35_202109-01~~| October 8, 2021 | November 19, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-5.65_202202-01 | March 2, 2022 | Post 5.67 |
| WA-GUEST-OS-5.64_202201-02 | February 11, 2022 | Post 5.66 |
-| WA-GUEST-OS-5.62_202112-01 | January 10, 2022 | Post 5.67 |
+|~~WA-GUEST-OS-5.62_202112-01~~| January 10, 2022 | March 2, 2022 |
|~~WA-GUEST-OS-5.61_202111-01~~| November 19, 2021 | February 11, 2022 | |~~WA-GUEST-OS-5.60_202110-01~~| November 1, 2021 | January 10, 2022 | |~~WA-GUEST-OS-5.59_202109-01~~| October 8, 2021 | November 19, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-4.100_202202-01 | March 2, 2022 | Post 4.102 |
| WA-GUEST-OS-4.99_202201-02 | February 11, 2022 | Post 4.101 |
-| WA-GUEST-OS-4.97_202112-01 | January 10 , 2022 | Post 4.100 |
+|~~WA-GUEST-OS-4.97_202112-01~~| January 10, 2022 | March 2, 2022 |
|~~WA-GUEST-OS-4.96_202111-01~~| November 19, 2021 | February 11, 2022 | |~~WA-GUEST-OS-4.95_202110-01~~| November 1, 2021 | January 10, 2022 | |~~WA-GUEST-OS-4.94_202109-01~~| October 8, 2021 | November 19, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
+| WA-GUEST-OS-3.107_202202-01 | March 2, 2022 | Post 3.109 |
| WA-GUEST-OS-3.106_202201-02 | February 11, 2022 | Post 3.108 |
-| WA-GUEST-OS-3.104_202112-01 | January 10, 2022 | Post 3.107 |
+|~~WA-GUEST-OS-3.104_202112-01~~| January 10, 2022 | March 2, 2022|
|~~WA-GUEST-OS-3.103_202111-01~~| November 19, 2021 | February 11, 2022 | |~~WA-GUEST-OS-3.102_202110-01~~| November 1, 2021 | January 10, 2022 | |~~WA-GUEST-OS-3.101_202109-01~~| October 8, 2021 | November 19, 2021 |
The September Guest OS has released.
| Configuration string | Release date | Disable date | | | | |
-| WA-GUEST-OS-2.119_202201-02 | February 11, 2022 | Post 2.119 |
-| WA-GUEST-OS-2.117_202112-01 | January 10, 2022 | Post 2.120 |
+| WA-GUEST-OS-2.120_202202-01 | March 2, 2022 | Post 2.122 |
+| WA-GUEST-OS-2.119_202201-02 | February 11, 2022 | Post 2.121 |
+|~~WA-GUEST-OS-2.117_202112-01~~| January 10, 2022 | March 2, 2022 |
|~~WA-GUEST-OS-2.116_202111-01~~| November 19, 2021 | February 11, 2022 | |~~WA-GUEST-OS-2.115_202110-01~~| November 1, 2021 | January 10, 2022 | |~~WA-GUEST-OS-2.114_202109-01~~| October 8, 2021 | November 19, 2021 |
cloud-services Schema Cscfg Networkconfiguration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/schema-cscfg-networkconfiguration.md
The following example shows the `NetworkConfiguration` element and its child ele
</Subnets> </InstanceAddress> <ReservedIPs>
- <ReservedIP name="<reserved-ip-name>"/>
+ <ReservedIP name="GROUP <ResourceGroupNameOfReservedIP> <reserved-ip-name>"/>
</ReservedIPs> </AddressAssignments> </NetworkConfiguration>
The following table describes the child elements of the `NetworkConfiguration` e
| ReservedIP | Optional. Specifies the reserved IP address that should be associated with the deployment. You must use Create Reserved IP Address to create the reserved IP address. Each deployment in a cloud service can be associated with one reserved IP address. The name of the reserved IP address is defined by a string for the `name` attribute.| ## See Also
-[Cloud Service (classic) Configuration Schema](schema-cscfg-file.md)
+[Cloud Service (classic) Configuration Schema](schema-cscfg-file.md)
cognitive-services How To Custom Speech Model And Endpoint Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-model-and-endpoint-lifecycle.md
What happens when a model expires and how to update the model depends on how it
### Batch transcription If a model that is used with [batch transcription](batch-transcription.md) expires, transcription requests will fail with a 4xx error. To prevent this, update the `model` parameter in the JSON sent in the **Create Transcription** request body to point to either a more recent base model or a more recent custom model. You can also remove the `model` entry from the JSON to always use the latest base model. ### Custom speech endpoint
-If a model expires that is used by a [custom speech endpoint](how-to-custom-speech-train-model.md), then the service will automatically fall back to using the latest base model for the language you are using. To update a model you are using, you can select **Deployment** in the **Custom Speech** menu at the top of the page and then click on the endpoint name to see its details. At the top of the details page, you will see an **Update Model** button that lets you seamlessly update the model used by this endpoint without downtime. You can also make this change programmatically by using the [**Update Model**](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) Rest API.
+If a model expires that is used by a [custom speech endpoint](how-to-custom-speech-train-model.md), then the service will automatically fall back to using the latest base model for the language you are using. To update a model you are using, you can select **Deployment** in the **Custom Speech** menu at the top of the page and then click on the endpoint name to see its details. At the top of the details page, you will see an **Update Model** button that lets you seamlessly update the model used by this endpoint without downtime. You can also make this change programmatically by using the [**Update Model**](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/UpdateModel) REST API.
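The programmatic route can be sketched as follows. This is a minimal sketch only: the region, IDs, and key are placeholders, and the exact request path and body shape should be verified against the linked **Update Model** REST API reference.

```python
# Build the (hypothetical) PATCH request that points a custom speech
# endpoint at a newer model; all values below are placeholders.
region = "westus"
endpoint_id = "00000000-0000-0000-0000-000000000000"
model_id = "11111111-1111-1111-1111-111111111111"

base = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.0"
url = f"{base}/endpoints/{endpoint_id}"
body = {"model": {"self": f"{base}/models/{model_id}"}}
headers = {
    "Ocp-Apim-Subscription-Key": "<your-speech-resource-key>",
    "Content-Type": "application/json",
}
# Sending the request would look like:
#   requests.patch(url, json=body, headers=headers)
```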
## Next steps
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/language-support.md
| Amharic | `am` |✔|✔||||
| Arabic | `ar` |✔|✔|✔|✔|✔|
| Armenian | `hy` |✔|✔||✔||
-| Assamese | `as` |✔|✔||||
+| Assamese | `as` |✔|✔|✔|||
| Azerbaijani | `az` |✔|✔||||
| Bangla | `bn` |✔|✔|✔||✔|
| Bashkir | `ba` |✔|||||
| Macedonian | `mk` |✔|||✔||
| Malagasy | `mg` |✔|✔|✔|||
| Malay | `ms` |✔|✔|✔|✔|✔|
-| Malayalam | `ml` |✔|✔||||
+| Malayalam | `ml` |✔|✔|✔|||
| Maltese | `mt` |✔|✔|✔|✔|✔|
| Maori | `mi` |✔|✔|✔|||
| Marathi | `mr` |✔|✔|✔|||
| Myanmar | `my` |✔|✔||✔||
| Nepali | `ne` |✔|✔||||
| Norwegian | `nb` |✔|✔|✔|✔|✔|
-| Odia | `or` |✔|✔||||
+| Odia | `or` |✔|✔|✔|||
| Pashto | `ps` |✔|✔||✔||
| Persian | `fa` |✔|✔|✔|✔|✔|
| Polish | `pl` |✔|✔|✔|✔|✔|
| Swahili | `sw` |✔|✔|✔|✔|✔|
| Swedish | `sv` |✔|✔|✔|✔|✔|
| Tahitian | `ty` |✔| |✔|✔||
-| Tamil | `ta` |✔|✔|||✔|
+| Tamil | `ta` |✔|✔|✔||✔|
| Tatar | `tt` |✔|||||
-| Telugu | `te` |✔|✔||||
+| Telugu | `te` |✔|✔|✔|||
| Thai | `th` |✔| |✔|✔|✔|
| Tibetan | `bo` |✔|||||
| Tigrinya | `ti` |✔|✔||||
cognitive-services Data Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/data-limits.md
+
+ Title: Data limits for Language service features
+
+description: Data and service limitations for Azure Cognitive Service for Language features.
++++++ Last updated : 02/25/2022+++
+# Service limits for Azure Cognitive Service for Language
+
+> [!NOTE]
+> This article only describes the limits for pre-configured features in Azure Cognitive Service for Language.
+> To see the service limits for customizable features, see the following articles:
+> * [Custom classification](../custom-classification/service-limits.md)
+> * [Custom NER](../custom-named-entity-recognition/service-limits.md)
+> * [Conversational language understanding](../conversational-language-understanding/service-limits.md)
+> * [Question answering](../question-answering/concepts/limits.md)
+
+Use this article to find the limits on the size and rate of data that you can send to the following features of the Language service.
+* [Named Entity Recognition (NER)](../named-entity-recognition/overview.md)
+* [Personally Identifiable Information (PII) detection](../personally-identifiable-information/overview.md)
+* [Key phrase extraction](../key-phrase-extraction/overview.md)
+* [Entity linking](../entity-linking/overview.md)
+* [Text Analytics for health](../text-analytics-for-health/overview.md)
+* [Sentiment analysis and opinion mining](../sentiment-opinion-mining/overview.md)
+* [Language detection](../language-detection/overview.md)
+
+When using features of the Language service, keep the following in mind:
+
+* Pricing is not affected by data or rate limits. Pricing is based on the number of text records you send to the API, and is subject to your Language resource's [pricing details](https://aka.ms/unifiedLanguagePricing).
+ * A text record is measured as 1000 characters.
+* Data and rate limits are based on the number of documents you send to the API. If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
+* A document is a single string of text characters.
+
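As a sketch of how billing relates to document length, the billed text-record count can be estimated like this. `count_text_records` is a hypothetical helper name, and the round-up-with-a-minimum-of-one behavior is an assumption based on the 1,000-character record size above:

```python
import math

def count_text_records(document: str) -> int:
    # One text record per 1,000 characters, rounding up; a short
    # document is assumed to count as a single record.
    return max(1, math.ceil(len(document) / 1000))

# A 2,500-character document is billed as 3 text records.
records = count_text_records("x" * 2500)
```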
+## Maximum characters per document
+
+The following limit specifies the maximum number of characters that can be in a single document.
+
+| Feature | Value |
+|||
+| Text Analytics for health | 30,720 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
+| All other pre-configured features (synchronous) | 5,120 as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
+| All other pre-configured features ([asynchronous](use-asynchronously.md)) | 125,000 characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements) (maximum of 25 documents). |
+
+If a document exceeds the character limit, the API will behave differently depending on how you're sending requests.
+
+If you're sending requests synchronously:
+* The API won't process a document that exceeds the maximum size, and will return an invalid document error for it. If an API request has multiple documents, the API will continue processing them if they are within the character limit.
+
+If you're sending requests [asynchronously](use-asynchronously.md):
+* The API will reject the entire request and return a `400 bad request` error if any document within it exceeds the maximum size.
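The chunking advice above can be sketched as a small helper. This is an illustrative splitter only (`chunk_document` is a hypothetical name): it splits on sentence boundaries and assumes no single sentence exceeds the limit.

```python
def chunk_document(text: str, max_chars: int = 5120) -> list[str]:
    # Accumulate sentences into chunks that stay within the synchronous
    # per-document character limit; flush a chunk before it would overflow.
    chunks, current = [], ""
    for sentence in text.split(". "):
        piece = sentence if not current else ". " + sentence
        if current and len(current) + len(piece) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current += piece
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be submitted as its own document; note that splitting can separate context that some features (for example, sentiment analysis) rely on.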
+
+## Maximum request size
+
+The following limit specifies the maximum size of documents contained in the entire request.
+
+| Feature | Value |
+|||
+| All pre-configured features | 1 MB |
+
+## Maximum documents per request
+
+Exceeding the following document limits will generate an HTTP 400 error code.
+
+> [!NOTE]
+> When sending asynchronous API requests, you can send a maximum of 25 documents per request.
+
+| Feature | Max Documents Per Request |
+|-|--|
+| Language Detection | 1000 |
+| Sentiment Analysis | 10 |
+| Opinion Mining | 10 |
+| Key Phrase Extraction | 10 |
+| Named Entity Recognition (NER) | 5 |
+| Personally Identifying Information (PII) detection | 5 |
+| Text summarization | 25 |
+| Entity Linking | 5 |
+| Text Analytics for health | 10 for the web-based API, 1000 for the container. |
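To stay under the per-request document limits in the table above, a client can batch its documents before calling the API. A minimal sketch (`batch_documents` is a hypothetical helper):

```python
def batch_documents(documents: list[str], max_per_request: int) -> list[list[str]]:
    # Group documents into request-sized batches so no single call
    # exceeds the per-feature document limit.
    return [documents[i:i + max_per_request]
            for i in range(0, len(documents), max_per_request)]

docs = [f"document {n}" for n in range(12)]
# With sentiment analysis's limit of 10 documents per request,
# 12 documents become two batches of 10 and 2.
batches = batch_documents(docs, max_per_request=10)
```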
+
+## Rate limits
+
+Your rate limit will vary with your [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/). These limits are the same for both versions of the API. These rate limits don't apply to the Text Analytics for health container, which does not have a set rate limit.
+
+| Tier | Requests per second | Requests per minute |
+||||
+| S / Multi-service | 1000 | 1000 |
+| S0 / F0 | 100 | 300 |
+
+Request rates are measured for each feature separately. You can send the maximum number of requests for your pricing tier to each feature at the same time. For example, in the `S` tier, if you send 1000 requests at once, you won't be able to send another request for 59 seconds.
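One way to respect the per-minute limit on the client side is a sliding-window pacer, sketched below. `RequestPacer` is a hypothetical helper, not part of any SDK: it tracks request timestamps and reports how long to wait before the next request is allowed.

```python
import collections

class RequestPacer:
    # Track request timestamps inside a sliding 60-second window.
    def __init__(self, max_per_minute: int):
        self.max_per_minute = max_per_minute
        self.sent = collections.deque()

    def wait_time(self, now: float) -> float:
        # Drop timestamps that have aged out of the window.
        while self.sent and now - self.sent[0] >= 60.0:
            self.sent.popleft()
        if len(self.sent) < self.max_per_minute:
            return 0.0  # a request may be sent immediately
        # Otherwise wait until the oldest request leaves the window.
        return 60.0 - (now - self.sent[0])

    def record(self, now: float) -> None:
        self.sent.append(now)
```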
+
+## See also
+
+* [What is Azure Cognitive Service for Language](../overview.md)
+* [Pricing details](https://aka.ms/unifiedLanguagePricing)
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/how-to/call-api.md
To send an API request, you will need a Language resource endpoint and key.
> [!NOTE] > You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
-Analysis is performed upon receipt of the request. For information on the size and number of requests you can send per minute and second, see the data limits below.
-
-Using entity linking synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
+Analysis is performed upon receipt of the request. Using entity linking synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
[!INCLUDE [asynchronous-result-availability](../../includes/async-result-availability.md)]
Using entity linking synchronously is stateless. No data is stored in your accou
You can stream the results to an application, or save the output to a file on the local system.
-## Data limits
-
-> [!NOTE]
-> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-> * A document is a single string of text characters.
-
-| Limit | Value |
-|||
-| Maximum size of a single document | 5,120 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum number of characters per request (asynchronous) | 125K characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum size of entire request | 1 MB |
-| Max Documents Per Request | 5 |
-
-If a document exceeds the character limit, the API will behave differently depending on whether you're using it synchronously or asynchronously:
-
-* Asynchronous: The API will reject the entire request and return a `400 bad request` error if any document within it exceeds the maximum size.
-* Synchronous: The API won't process a document that exceeds the maximum size, and will return an invalid document error for it. If an API request has multiple documents, the API will continue processing them if they are within the character limit.
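The note above suggests breaking oversized text into smaller chunks before sending it to the API. A minimal sketch of that chunking step (not from the article; it uses Python's `len()` as an approximation of the `StringInfo.LengthInTextElements` measure the service applies):

```python
def chunk_text(text, max_chars=5120):
    """Split text into pieces that each fit the single-document limit."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

# A 12,000-character input becomes three documents within the limit.
docs = chunk_text("x" * 12000)
print([len(d) for d in docs])  # -> [5120, 5120, 1760]
```

For real text you would want to split on sentence or word boundaries rather than fixed offsets, so entities are not cut in half.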
-
-### Rate limits
-
-Your rate limit will vary with your [pricing tier](https://aka.ms/unifiedLanguagePricing).
+## Service and data limits
-| Tier | Requests per second | Requests per minute |
-||||
-| S / Multi-service | 1000 | 1000 |
-| S0 / F0 | 100 | 300 |
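When a client exceeds the rate limits in the table above, the service throttles the request. One common way to handle this, shown here as an illustrative sketch rather than anything prescribed by the article, is exponential backoff on HTTP 429 responses:

```python
import time

def call_with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Retry a callable that returns an HTTP status code while it reports
    429 (throttled), doubling the wait between attempts."""
    for attempt in range(max_retries):
        status = send_request()
        if status != 429:
            return status
        time.sleep(base_delay * (2 ** attempt))  # wait before retrying
    return 429  # still throttled after all retries
```

`send_request` is a hypothetical callable standing in for whatever HTTP client you use to call the endpoint.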
## See also
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/how-to/call-api.md
Previously updated : 12/10/2021 Last updated : 03/01/2022
To send an API request, you will need your Language resource endpoint and key.
> [!NOTE] > You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
-Analysis is performed upon receipt of the request. For information on the size and number of requests you can send per minute and second, see the data limits below.
-
-Using the key phrase extraction feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
+Analysis is performed upon receipt of the request. Using the key phrase extraction feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
[!INCLUDE [asynchronous-result-availability](../../includes/async-result-availability.md)]
Using the key phrase extraction feature synchronously is stateless. No data is s
When you receive results from the API, the order of the returned key phrases is determined internally, by the model. You can stream the results to an application, or save the output to a file on the local system.
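Because each request accepts only a limited number of documents, a larger workload has to be split into several request bodies. The following sketch (not from the article; the `{"documents": [...]}` payload shape follows the v3.x Text Analytics REST request format and should be checked against the current API version) groups input strings into request-sized batches:

```python
def batch_documents(texts, max_docs=10, language="en"):
    """Group input strings into request payloads of at most max_docs
    documents each, in the {"documents": [...]} request shape."""
    batches = []
    for start in range(0, len(texts), max_docs):
        documents = [
            {"id": str(start + i + 1), "language": language, "text": text}
            for i, text in enumerate(texts[start:start + max_docs])
        ]
        batches.append({"documents": documents})
    return batches
```

Each batch can then be submitted as a separate POST request, subject to the per-request size limits.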
-## Data limits
-
-> [!NOTE]
-> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-> * A document is a single string of text characters.
-
-| Limit | Value |
-|||
-| Maximum size of a single document (synchronous) | 5,120 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum number of characters per request (asynchronous) | 125K characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum size of entire request | 1 MB. |
-| Max Documents Per Request | 10 |
-
-If a document exceeds the character limit, the API will behave differently depending on the endpoint you're using:
-
-* Asynchronous: The API will reject the entire request and return a `400 bad request` error if any document within it exceeds the maximum size.
-* Synchronous: The API won't process a document that exceeds the maximum size, and will return an invalid document error for it. If an API request has multiple documents, the API will continue processing them if they are within the character limit.
-
-### Rate limits
-
-Your rate limit will vary with your [pricing tier](https://aka.ms/unifiedLanguagePricing).
+## Service and data limits
-| Tier | Requests per second | Requests per minute |
-||||
-| S / Multi-service | 1000 | 1000 |
-| S0 / F0 | 100 | 300 |
## Next steps
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/how-to/use-containers.md
Containers enable you to host the Key Phrase Extraction API on your own infrastr
> [!NOTE]
-> * The free account is limited to 5,000 text records per month and only the **Free** and **Standard** [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics) are valid for containers. For more information on transaction request rates, see [Data Limits](call-api.md#data-limits).
+> * The free account is limited to 5,000 text records per month and only the **Free** and **Standard** [pricing tiers](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics) are valid for containers. For more information on transaction request rates, see [Data and service limits](../../concepts/data-limits.md).
Containers enable you to run the Key Phrase Extraction APIs in your own environment and are great for your specific security and data governance requirements. The Key Phrase Extraction containers provide advanced natural language processing over raw text, and include three main functions: sentiment analysis, Key Phrase Extraction, and language detection.
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/how-to/call-api.md
Previously updated : 12/03/2021 Last updated : 03/01/2022
If you have content expressed in a less frequently used language, you can try th
> [!TIP] > You can use a [Docker container](use-containers.md) for language detection, so you can use the API on-premises.
-Analysis is performed upon receipt of the request. For information on the size and number of requests you can send per minute and second, see the data limits below.
-
-Using the language detection feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
+Analysis is performed upon receipt of the request. Using the language detection feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
[!INCLUDE [asynchronous-result-availability](../../includes/async-result-availability.md)]
The resulting output consists of the predominant language, with a score of less
} ```
-## Data limits
-
-> [!NOTE]
-> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-> * A document is a single string of text characters.
-
-| Limit | Value |
-|||
-| Maximum size of a single document (synchronous) | 5,120 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum number of characters per request (asynchronous) | 125K characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum size of entire request | 1 MB. |
-| Max Documents Per Request | 1000 |
-
-If a document exceeds the character limit, the API will behave differently depending on the endpoint you're using:
-
-* Asynchronous: The API will reject the entire request and return a `400 bad request` error if any document within it exceeds the maximum size.
-* Synchronous: The API won't process a document that exceeds the maximum size, and will return an invalid document error for it. If an API request has multiple documents, the API will continue processing them if they are within the character limit.
-
-### Rate limits
-
-Your rate limit will vary with your [pricing tier](https://aka.ms/unifiedLanguagePricing).
+## Service and data limits
-| Tier | Requests per second | Requests per minute |
-||||
-| S / Multi-service | 1000 | 1000 |
-| S0 / F0 | 100 | 300 |
## See also
cognitive-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/how-to-call.md
Previously updated : 12/10/2021 Last updated : 03/01/2022
When you submit documents to be processed, you can specify which of [the support
## Submitting data
-Analysis is performed upon receipt of the request. For information on the size and number of requests you can send per minute and second, see the data limits section below.
-
-Using the NER feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
+Analysis is performed upon receipt of the request. Using the NER feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
[!INCLUDE [asynchronous-result-availability](../includes/async-result-availability.md)]
The API will attempt to detect the [defined entity categories](concepts/named-en
When you get results from NER, you can stream the results to an application or save the output to a file on the local system. The API response will include [recognized entities](concepts/named-entity-categories.md), including their categories and sub-categories, and confidence scores.
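Since each recognized entity carries a confidence score, a common post-processing step is to drop low-confidence matches before using the results. A small illustrative sketch (the `confidenceScore` field name follows the v3.x response shape and is an assumption here):

```python
def filter_entities(entities, min_confidence=0.8):
    """Keep only recognized entities whose confidence score meets the threshold."""
    return [e for e in entities if e.get("confidenceScore", 0.0) >= min_confidence]

entities = [
    {"text": "Seattle", "category": "Location", "confidenceScore": 0.95},
    {"text": "it", "category": "Person", "confidenceScore": 0.31},
]
print(filter_entities(entities))  # keeps only the "Seattle" entity
```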
-## Data limits
-
-> [!NOTE]
-> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-> * A document is a single string of text characters.
-
-| Limit | Value |
-|||
-| Maximum size of a single document (synchronous) | 5,120 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum number of characters per request (asynchronous) | 125K characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum size of entire request | 1 MB. |
-| Max documents per request | 5 |
-
-If a document exceeds the character limit, the API will behave differently depending on the feature you're using:
-
-* Asynchronous:
- * The API will reject the entire request and return a `400 bad request` error if any document within it exceeds the maximum size.
-* Synchronous:
- * The API won't process a document that exceeds the maximum size, and will return an invalid document error for it. If an API request has multiple documents, the API will continue processing them if they are within the character limit.
-
-Exceeding the maximum number of documents you can send in a single request will generate an HTTP 400 error code.
-
-### Rate limits
-
-Your rate limit will vary with your [pricing tier](https://aka.ms/unifiedLanguagePricing).
+## Service and data limits
-| Tier | Requests per second | Requests per minute |
-||||
-| S / Multi-service | 1000 | 1000 |
-| S0 / F0 | 100 | 300 |
## Next steps
cognitive-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call.md
Previously updated : 12/03/2021 Last updated : 03/01/2022
When you submit documents to be processed, you can specify which of [the support
## Submitting data
-Analysis is performed upon receipt of the request. For information on the size and number of requests you can send per minute and second, see the data limits section below.
-
-Using the PII detection feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
+Analysis is performed upon receipt of the request. Using the PII detection feature synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
[!INCLUDE [asynchronous-result-availability](../includes/async-result-availability.md)]
The API will attempt to detect the [defined entity categories](concepts/entity-c
When you get results from PII detection, you can stream the results to an application or save the output to a file on the local system. The API response will include [recognized entities](concepts/entity-categories.md), including their categories and sub-categories, and confidence scores. The text string with the PII entities redacted will also be returned.
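The redaction the service performs can be approximated locally from the character offsets and lengths of the recognized entities. This is an illustrative sketch only, not the service's implementation:

```python
def redact(text, entities):
    """Replace each entity span (given by character offset and length)
    with asterisks, approximating the redacted text the service returns."""
    chars = list(text)
    for entity in entities:
        start, length = entity["offset"], entity["length"]
        chars[start:start + length] = "*" * length
    return "".join(chars)

print(redact("Call 555-1234 now", [{"offset": 5, "length": 8}]))
# -> "Call ******** now"
```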
-## Data limits
-
-> [!NOTE]
-> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-> * A document is a single string of text characters.
-
-| Limit | Value |
-|||
-| Maximum size of a single document (synchronous) | 5,120 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum number of characters per request (asynchronous) | 125K characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum size of entire request | 1 MB. |
-| Max documents per request | 5 |
-
-If a document exceeds the character limit, the API will behave differently depending on the feature you're using:
-
-* Asynchronous:
- * The API will reject the entire request and return a `400 bad request` error if any document within it exceeds the maximum size.
-* Synchronous:
- * The API won't process a document that exceeds the maximum size, and will return an invalid document error for it. If an API request has multiple documents, the API will continue processing them if they are within the character limit.
-
-Exceeding the maximum number of documents you can send in a single request will generate an HTTP 400 error code.
-
-### Rate limits
-
-Your rate limit will vary with your [pricing tier](https://aka.ms/unifiedLanguagePricing).
+## Service and data limits
-| Tier | Requests per second | Requests per minute |
-||||
-| S / Multi-service | 1000 | 1000 |
-| S0 / F0 | 100 | 300 |
## Next steps
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/how-to/call-api.md
Previously updated : 12/03/2021 Last updated : 03/01/2022
To send an API request, you will need your Language resource endpoint and key.
> [!NOTE] > You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
-Analysis is performed upon receipt of the request. For information on the size and number of requests you can send per minute and second, see the data limits section below.
-
-Using the sentiment analysis and opinion mining features synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
+Analysis is performed upon receipt of the request. Using the sentiment analysis and opinion mining features synchronously is stateless. No data is stored in your account, and results are returned immediately in the response.
[!INCLUDE [asynchronous-result-availability](../../includes/async-result-availability.md)]
Opinion Mining will locate targets (nouns or verbs) in the text, and their assoc
The API returns opinions as a target (noun or verb) and an assessment (adjective).
+## Service and data limits
-## Data limits
-
-> [!NOTE]
-> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-> * A document is a single string of text characters.
-
-| Limit | Value |
-|||
-| Maximum size of a single document (synchronous) | 5,120 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum number of characters per request (asynchronous) | 125K characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum size of entire request | 1 MB. |
-| Max documents per request | 10 |
-
-If a document exceeds the character limit, the API will behave differently depending on the feature you're using:
-
-* Asynchronous:
- * The API will reject the entire request and return a `400 bad request` error if any document within it exceeds the maximum size.
-* Synchronous:
- * The API won't process a document that exceeds the maximum size, and will return an invalid document error for it. If an API request has multiple documents, the API will continue processing them if they are within the character limit.
-
-Exceeding the maximum number of documents you can send in a single request will generate an HTTP 400 error code.
-
-### Rate limits
-
-Your rate limit will vary with your [pricing tier](https://aka.ms/unifiedLanguagePricing).
-
-| Tier | Requests per second | Requests per minute |
-||||
-| S / Multi-service | 1000 | 1000 |
-| S0 / F0 | 100 | 300 |
## See also
cognitive-services Use Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/how-to/use-containers.md
Previously updated : 11/29/2021 Last updated : 03/01/2022 keywords: on-premises, Docker, container, sentiment analysis, natural language processing
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/call-api.md
Previously updated : 12/03/2021 Last updated : 03/01/2022
To send an API request, you will need your Language resource endpoint and key.
> [!NOTE] > You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
-Analysis is performed upon receipt of the request. For information on the size and number of requests you can send per minute and second, see the data limits below.
-
-If you send a request using the REST API or client library, the results will be returned asynchronously. If you're using the Docker container, they will be returned synchronously.
+Analysis is performed upon receipt of the request. If you send a request using the REST API or client library, the results will be returned asynchronously. If you're using the Docker container, they will be returned synchronously.
[!INCLUDE [asynchronous-result-availability](../../includes/async-result-availability.md)]
Depending on your API request, and the data you submit to the Text Analytics for
[!INCLUDE [Text Analytics for health features](../includes/features.md)]
-## Data limits
-
-> [!NOTE]
-> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API. For best results, split text between sentences.
-> * A document is a single string of text characters.
-
-| Limit | Value |
-|||
-| Maximum size of a single document | 30,720 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Maximum size of entire request | 1 MB |
-| Max Documents Per Request | 10 for the web-based API, 1000 for the container. |
-
-If a document exceeds the character limit, the API won't process a document that exceeds the maximum size, and will return an invalid document error for it. If an API request has multiple documents, the API will continue processing them if they are within the character limit.
-
-When you send a document larger than 5,120 characters, it will be split by Text Analytics for health into chunks of 5,120 characters. If two entities are present on either side of a split that are related, the model will not be able to detect the relation. To prevent potential relations from being undetected, consider splitting your text into documents of 5,120 characters or less, consisting only of full sentences.
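The paragraph above recommends splitting text into documents of 5,120 characters or less that consist only of full sentences, so that a related pair of entities is not separated by an arbitrary split. A minimal sketch of sentence-aware chunking (not from the article; the regex-based sentence splitter is a simplification, and clinical text may need a more robust sentence detector):

```python
import re

def sentence_chunks(text, max_chars=5120):
    """Group whole sentences into chunks of at most max_chars characters."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        candidate = f"{current} {sentence}".strip()
        if current and len(candidate) > max_chars:
            chunks.append(current)   # current chunk is full; start a new one
            current = sentence
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks
```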
-
-### Rate limits
-
-Your rate limit will vary with your [pricing tier](https://aka.ms/unifiedLanguagePricing). These limits are the same for both versions of the API. These rate limits don't apply to the Text Analytics for health container, which does not have a set rate limit.
+## Service and data limits
-| Tier | Requests per second | Requests per minute |
-||||
-| S / Multi-service | 1000 | 1000 |
-| F0 | 100 | 300 |
## See also
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
Previously updated : 02/16/2022 Last updated : 03/01/2022
To use this feature, you submit raw unstructured text for analysis and handle th
## Input requirements and service limits
-* Text Analytics for health takes raw unstructured text for analysis. See the [data and service limits](how-to/call-api.md#data-limits) in the how-to guide for more information.
+* Text Analytics for health takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) for more information.
* Text Analytics for health works with a variety of written languages. See [language support](language-support.md) for more information. ## Reference documentation and code samples
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-summarization/how-to/call-api.md
Previously updated : 12/10/2021 Last updated : 03/01/2022
When you submit documents to be processed by key phrase extraction, you can spec
## Submitting data
-You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request, and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below.
+You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request and receiving the results.
When using this feature, the API results are available for 24 hours from the time the request was ingested, as indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
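Because the API is asynchronous, a client typically submits the job and then polls its status until the operation reaches a terminal state. A generic polling sketch (the status names are illustrative, not taken from the article):

```python
import time

def poll_until_done(get_status, interval=1.0, timeout=30.0):
    """Poll an asynchronous job's status until it reaches a terminal state."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in ("succeeded", "failed"):
            return status
        time.sleep(interval)  # wait before checking again
    raise TimeoutError("operation did not complete before the timeout")
```

`get_status` is a hypothetical callable that would issue the status GET request and return the job state string.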
Using the above example, the API might return the following summarized sentences
*"At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better."*
-## Data limits
+## Service and data limits
-This section describes the limits for the size, and rates that you can send data to the Text Summarization API. Note that pricing is not affected by the data limits or rate limits. Pricing is subject to your Language resource [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
-
-> [!NOTE]
-> * If you need to analyze larger documents than the limit allows, you can break the text into smaller chunks of text before sending them to the API.
-> * A document is a single string of text characters.
-
-| Limit | Value |
-|||
-| Maximum number of characters per request | 125K characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Max Documents Per Request | 25 |
-
-If a document exceeds the limits, the API will reject the entire request and return a `400 bad request` error if any document within it exceeds the maximum size.
-
-### Rate limits
-
-Your rate limit will vary with your [pricing tier](https://aka.ms/unifiedLanguagePricing).
-
-| Tier | Requests per second | Requests per minute |
-||||
-| S / Multi-service | 1000 | 1000 |
-| S0 / F0 | 100 | 300 |
## See also
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-summarization/overview.md
Previously updated : 01/26/2022 Last updated : 03/01/2022
To use this feature, you submit raw unstructured text for analysis and handle th
## Input requirements and service limits
-* Text summarization takes raw unstructured text for analysis. See the [data and service limits](how-to/call-api.md#data-limits) in the how-to guide for more information.
+* Text summarization takes raw unstructured text for analysis. See [Data and service limits](../concepts/data-limits.md) for more information.
* Text summarization works with a variety of written languages. See [language support](language-support.md) for more information. ## Reference documentation and code samples
communication-services Credentials Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/credentials-best-practices.md
For the Communication clients to be able to cancel ongoing refresh tasks, it's n
```javascript var controller = new AbortController();
-var signal = controller.signal;
var joinChatBtn = document.querySelector('.joinChat');
var leaveChatBtn = document.querySelector('.leaveChat');
-joinChatBtn.addEventListener('click', function() {
+joinChatBtn.addEventListener('click', function () {
  // Wrong:
  const tokenCredentialWrong = new AzureCommunicationTokenCredential({
- tokenRefresher: async () => fetchTokenFromMyServerForUser("<user_name>")
- });
+ tokenRefresher: async () => fetchTokenFromMyServerForUser("<user_name>")
+ });
  // Correct: Pass abortSignal through the arrow function
  const tokenCredential = new AzureCommunicationTokenCredential({
- tokenRefresher: async (abortSignal) => fetchTokenFromMyServerForUser(abortSignal, "<user_name>")
- });
-
+ tokenRefresher: async (abortSignal) => fetchTokenFromMyServerForUser(abortSignal, "<user_name>")
+ });
+  // ChatClient is now able to abort token refresh tasks
+  const chatClient = new ChatClient("<endpoint-url>", tokenCredential);
+ // Pass the abortSignal to the chat client through options
+ const createChatThreadResult = await chatClient.createChatThread(
+ { topic: "Hello, World!" },
+ {
+ // ...
+ abortSignal: controller.signal
+ }
+ );
+  // ...
});
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
There are many options for teams to build and deploy cloud native and containeri
- [Azure Kubernetes Service](#azure-kubernetes-service) - [Azure Functions](#azure-functions) - [Azure Spring Cloud](#azure-spring-cloud)
+- [Azure Red Hat OpenShift](#azure-red-hat-openshift)
There's no perfect solution for every use case and every team. The following explanation provides general guidance and recommendations as a starting point to help find the best fit for your team and your requirements.
Azure Container Apps enables you to build serverless microservices based on cont
Azure Container Apps doesn't provide direct access to the underlying Kubernetes APIs. If you require access to the Kubernetes APIs and control plane, you should use [Azure Kubernetes Service](../aks/intro-kubernetes.md). However, if you would like to build Kubernetes-style applications and don't require direct access to all the native Kubernetes APIs and cluster management, Container Apps provides a fully managed experience based on best-practices. For these reasons, many teams may prefer to start building container microservices with Azure Container Apps.
+You can get started building your first container app [using the quickstarts](get-started.md).
+ ### Azure App Service Azure App Service provides fully managed hosting for web applications including websites and web APIs. These web applications may be deployed using code or containers. Azure App Service is optimized for web applications. Azure App Service is integrated with other Azure services including Azure Container Apps or Azure Functions. When building web apps, Azure App Service is an ideal option.
Azure Functions is a serverless Functions-as-a-Service (FaaS) solution. It's opt
### Azure Spring Cloud Azure Spring Cloud makes it easy to deploy Spring Boot microservice applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. If your team or organization is predominantly Spring, Azure Spring Cloud is an ideal option.
+### Azure Red Hat OpenShift
+Azure Red Hat OpenShift is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
+ ## Next steps > [!div class="nextstepaction"]
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
To use a managed identity, the identity must be granted access to one or more Az
### Limitations * Currently you can't use a managed identity in a container group deployed to a virtual network.
-* You can't use a managed identity to pull an image from Azure Container Registry when creating a container group. The identity is only available within a running container.
[!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
cosmos-db Defender For Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/defender-for-cosmos-db.md
You can configure Microsoft Defender protection in any of several ways, describe
# [REST API](#tab/rest-api)
-Use Rest API commands to create, update, or get the Azure Defender setting for a specific Azure Cosmos DB account.
+Use REST API commands to create, update, or get the Azure Defender setting for a specific Azure Cosmos DB account.
* [Advanced Threat Protection - Create](/rest/api/securitycenter/advancedthreatprotection/create) * [Advanced Threat Protection - Get](/rest/api/securitycenter/advancedthreatprotection/get)
cosmos-db Table Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/table-import.md
Title: Migrate existing data to a Table API account in Azure Cosmos DB description: Learn how to migrate or import on-premises or cloud data to an Azure Table API account in Azure Cosmos DB.--++ Previously updated : 11/08/2021 Last updated : 03/03/2022
This tutorial covers the following tasks:
## Data migration tool
+> [!IMPORTANT]
+> Ownership of the Data Migration Tool has been transferred to a third party, who maintains it as an open-source project. The tool is currently being updated to use the latest NuGet packages, so it doesn't currently work on the main branch. There is a fork of this tool that does work. You can learn more [here](https://github.com/Azure/azure-documentdb-datamigrationtool/issues/89).
+ You can use the command-line data migration tool (dt.exe) in Azure Cosmos DB to import your existing Azure Table Storage data to a Table API account. To migrate table data:
cost-management-billing Allocate Costs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/allocate-costs.md
The following items are currently unsupported by the cost allocation public prev
## Next steps - Read the [Cost Management + Billing FAQ](../cost-management-billing-faq.yml) for questions and answers about cost allocation.-- Create or update allocation rules using the [Cost allocation Rest API](/rest/api/cost-management/costallocationrules)
+- Create or update allocation rules using the [Cost allocation REST API](/rest/api/cost-management/costallocationrules)
- Learn more about [How to optimize your cloud investment with Cost Management](cost-mgt-best-practices.md)
cost-management-billing Tutorial Acm Opt Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-acm-opt-recommendations.md
The list of recommendations identifies usage inefficiencies or shows purchase re
The **Impact** category, along with the **Potential yearly savings**, is designed to help identify recommendations that have the potential to save as much as possible. High impact recommendations include:-- [Buy reserved virtual machine instances to save money over pay-as-you-go costs](../../advisor/advisor-cost-recommendations.md#buy-reserved-virtual-machine-instances-to-save-money-over-pay-as-you-go-costs)
+- [Buy reserved virtual machine instances to save money over pay-as-you-go costs](../../advisor/advisor-reference-cost-recommendations.md#buy-virtual-machine-reserved-instances-to-save-money-over-pay-as-you-go-costs)
- [Optimize virtual machine spend by resizing or shutting down underutilized instances](../../advisor/advisor-cost-recommendations.md#optimize-virtual-machine-spend-by-resizing-or-shutting-down-underutilized-instances)-- [Use Standard Storage to store Managed Disks snapshots](../../advisor/advisor-cost-recommendations.md#use-standard-snapshots-for-managed-disks)
+- [Use Standard Storage to store Managed Disks snapshots](../../advisor/advisor-reference-cost-recommendations.md#use-standard-storage-to-store-managed-disks-snapshots)
Medium impact recommendations include:-- [Delete Azure Data Factory pipelines that are failing](../../advisor/advisor-cost-recommendations.md#delete-azure-data-factory-pipelines-that-are-failing)-- [Reduce costs by eliminating un-provisioned ExpressRoute circuits](../../advisor/advisor-cost-recommendations.md#reduce-costs-by-eliminating-unprovisioned-expressroute-circuits)-- [Reduce costs by deleting or reconfiguring idle virtual network gateways](../../advisor/advisor-cost-recommendations.md#reduce-costs-by-deleting-or-reconfiguring-idle-virtual-network-gateways)
+- [Reduce costs by eliminating un-provisioned ExpressRoute circuits](../../advisor/advisor-reference-cost-recommendations.md#delete-expressroute-circuits-in-the-provider-status-of-not-provisioned)
+- [Reduce costs by deleting or reconfiguring idle virtual network gateways](../../advisor/advisor-reference-cost-recommendations.md#repurpose-or-delete-idle-virtual-network-gateways)
## Act on a recommendation
cost-management-billing Billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/billing-subscription-transfer.md
If you're an Enterprise Agreement (EA) customer, your enterprise administrator c
Only the billing administrator of an account can transfer ownership of a subscription.
-When you send or accept transfer request, you agree to terms and conditions. For more information, see [Transfer terms and conditions](subscription-transfer.md#transfer-terms-and-conditions).
+When you send or accept a transfer request, you agree to terms and conditions. For more information, see [Transfer terms and conditions](subscription-transfer.md#transfer-terms-and-conditions).
## Transfer billing ownership of an Azure subscription
-1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator of the billing account that has the subscription that you want to transfer. If you're not sure if you're and administrator, or if you need to determine who is, see [Determine account billing administrator](add-change-subscription-administrator.md#whoisaa).
+1. Sign in to the [Azure portal](https://portal.azure.com) as an administrator of the billing account that has the subscription that you want to transfer. If you're not sure if you're an administrator, or if you need to determine who is, see [Determine account billing administrator](add-change-subscription-administrator.md#whoisaa).
1. Search for **Cost Management + Billing**. ![Screenshot that shows Azure portal search](./media/billing-subscription-transfer/billing-search-cost-management-billing.png) 1. Select **Subscriptions** from the left-hand pane. Depending on your access, you may need to select a billing scope and then select **Subscriptions** or **Azure subscriptions**.
An Azure Active Directory (AD) tenant is created for you when you sign up for Az
When you create a new subscription, it's hosted in your account's Azure AD tenant. If you want to give others access to your subscription or its resources, you need to invite them to join your tenant. Doing so helps you control access to your subscriptions and resources.
-When you transfer billing ownership of your subscription to an account in another Azure AD tenant, you can move the subscription to the new account's tenant. If you do so, all users, groups, or service principals that had [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) to manage subscriptions and its resources lose their access. Only the user in the new account who accepts your transfer request will have access to manage the resources. The new owner must manually add these users to the subscription to provide access to the use who lost it. For more information, see [Transfer an Azure subscription to a different Azure AD directory](../../role-based-access-control/transfer-subscription.md).
+When you transfer billing ownership of your subscription to an account in another Azure AD tenant, you can move the subscription to the new account's tenant. If you do so, all users, groups, or service principals that had [Azure role assignments](../../role-based-access-control/role-assignments-portal.md) to manage subscriptions and its resources lose their access. Only the user in the new account who accepts your transfer request will have access to manage the resources. The new owner must manually add these users to the subscription to provide access to the user who lost it. For more information, see [Transfer an Azure subscription to a different Azure AD directory](../../role-based-access-control/transfer-subscription.md).
## Transfer Visual Studio and Partner Network subscriptions
Only one transfer request is active at a time. A transfer request is valid for 1
To cancel a transfer request: 1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Navigate to **Subscriptions** > Select the subscription that you sent a transfer request for then select **Transfer billing ownership**.
+1. Navigate to **Subscriptions** > Select the subscription that you sent a transfer request for, then select **Transfer billing ownership**.
1. At the bottom of the page, select **Cancel the transfer request**. :::image type="content" source="./media/billing-subscription-transfer/transfer-billing-owership-cancel-request.png" alt-text="Example showing the Transfer billing ownership window with the Cancel the transfer request option" lightbox="./media/billing-subscription-transfer/transfer-billing-owership-cancel-request.png" :::
Use the following troubleshooting information if you're having trouble transferr
> [!Note] > This section specifically applies to a billing account for a Microsoft Customer Agreement. Check if you have access to a [Microsoft Customer Agreement](mca-request-billing-ownership.md#check-for-access).
-It's possible that the original billing account owner who created an Azure account and an Azure subscription leaves your organization. If that situation happens, then their user identity is no longer in the organization's Azure Active Directory. Then the Azure subscription doesn't have a billing owner. This situation prevents anyone from performing billing operations to the account, including viewing, and paying bills. The subscription could go into a past-due state. Eventually the subscription could get disabled because of non-payment. Ultimately, the subscription could get deleted and it would affect every service that runs on the subscription.
+It's possible that the original billing account owner who created an Azure account and an Azure subscription leaves your organization. If that situation happens, then their user identity is no longer in the organization's Azure Active Directory. Then the Azure subscription doesn't have a billing owner. This situation prevents anyone from performing billing operations to the account, including viewing and paying bills. The subscription could go into a past-due state. Eventually, the subscription could get disabled because of non-payment. Ultimately, the subscription could get deleted, affecting every service that runs on the subscription.
When a subscription no longer has a valid billing account owner, Azure sends an email to other Billing account owners, Service Administrators (if any), Co-Administrators (if any), and Subscription Owners, informing them of the situation and providing them with a link to accept billing ownership of the subscription. Any one of the users can select the link to accept billing ownership. For more information about billing roles, see [Billing Roles](understand-mca-roles.md) and [Classic Roles and Azure RBAC Roles](../../role-based-access-control/rbac-and-directory-admin-roles.md).
The self-service subscription transfer isn't available for your billing account.
### Not all subscription types can transfer
-Not all types of subscriptions support billing ownership transfer. To view list of subscription types that support transfers, see [Azure subscription transfer hub](subscription-transfer.md).
+Not all types of subscriptions support billing ownership transfer. To view the list of subscription types that support transfers, see [Azure subscription transfer hub](subscription-transfer.md).
### Access denied error shown when trying to transfer subscription billing ownership
If you have questions or need help, [create a support request](https://go.micro
## Next steps -- Review and update the Service Admin, Co-Admins, and Azure role assignments. To learn more, see [Add or change Azure subscription administrators](add-change-subscription-administrator.md) and [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+- Review and update the Service Admin, Co-Admins, and Azure role assignments. To learn more, see [Add or change Azure subscription administrators](add-change-subscription-administrator.md) and [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
cost-management-billing Mca Section Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/mca-section-invoice.md
tags: billing
Previously updated : 01/03/2022 Last updated : 03/03/2022
To create an invoice section, you need to be a **billing profile owner** or a **
To create a billing profile, you need to be a **billing account owner** or a **billing account contributor**. For more information, see [Manage billing profiles for billing account](understand-mca-roles.md#manage-billing-profiles-for-billing-account).
-Adding additional billing profiles is supported only for direct Microsoft Customer Agreements (working with a Microsoft representative). If you don't see the **Add** option on the Billing profile page, the feature isn't available for your account. If you don't have a direct Microsoft Customer Agreement, you can contact the Digital Sales team by chat, phone, or ticket. For more information, see [Contact Azure Sales](https://azure.microsoft.com/overview/contact-azure-sales/#contact-sales).
- > [!IMPORTANT] > > Creating additional billing profiles may impact your overall cost. For more information, see [Things to consider when adding new billing profiles](#things-to-consider-when-adding-new-billing-profiles).
cost-management-billing Understand Ea Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/understand-ea-roles.md
Previously updated : 10/22/2021 Last updated : 03/02/2022
Users with this role have the highest level of access. They can:
You can have multiple enterprise administrators in an enterprise enrollment. You can grant read-only access to enterprise administrators. They all inherit the department administrator role.
+The enterprise administrator role can be assigned to multiple accounts.
+ ### EA purchaser Users with this role have permissions to purchase Azure services, but are not allowed to manage accounts. They can:
Users with this role can:
Each account requires a unique work, school, or Microsoft account. For more information about Azure Enterprise portal administrative roles, see [Understand Azure Enterprise Agreement administrative roles in Azure](understand-ea-roles.md).
+There can be only one account owner per account. However, there can be multiple accounts in an EA enrollment. Each account has a unique account owner.
+ ### Service administrator The service administrator role has permissions to manage services in the Azure portal and assign users to the coadministrator role.
cost-management-billing Reserved Instance Purchase Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reserved-instance-purchase-recommendations.md
Last updated 01/27/2021
# Reservation recommendations
-Azure reserved instance (RI) purchase recommendations are provided through Azure Consumption [Reservation Recommendation API](/rest/api/consumption/reservationrecommendations), [Azure Advisor](../../advisor/advisor-cost-recommendations.md#buy-reserved-virtual-machine-instances-to-save-money-over-pay-as-you-go-costs), and through the reservation purchase experience in the Azure portal.
+Azure reserved instance (RI) purchase recommendations are provided through Azure Consumption [Reservation Recommendation API](/rest/api/consumption/reservationrecommendations), [Azure Advisor](../../advisor/advisor-reference-cost-recommendations.md#reserved-instances), and through the reservation purchase experience in the Azure portal.
The following steps define how recommendations are calculated:
data-factory Control Flow Web Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-web-activity.md
Property | Description | Allowed values | Required
-- | -- | -- | --
name | Name of the web activity | String | Yes
type | Must be set to **WebActivity**. | String | Yes
-method | Rest API method for the target endpoint. | String. <br/><br/>Supported Types: "GET", "POST", "PUT" | Yes
+method | REST API method for the target endpoint. | String. <br/><br/>Supported Types: "GET", "POST", "PUT" | Yes
url | Target endpoint and path | String (or expression with resultType of string). The activity will timeout at 1 minute with an error if it does not receive a response from the endpoint. | Yes
headers | Headers that are sent to the request. For example, to set the language and type on a request: `"headers" : { "Accept-Language": "en-us", "Content-Type": "application/json" }`. | String (or expression with resultType of string) | Yes, Content-type header is required. `"headers":{ "Content-Type":"application/json"}`
body | Represents the payload that is sent to the endpoint. | String (or expression with resultType of string). <br/><br/>See the schema of the request payload in [Request payload schema](#request-payload-schema) section. | Required for POST/PUT methods.
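Putting the properties in the table together, a minimal Web activity definition might look like the following sketch. The activity name, URL, and body shown here are hypothetical placeholders, not values from this article:

```json
{
    "name": "CallRestEndpoint",
    "type": "WebActivity",
    "typeProperties": {
        "method": "POST",
        "url": "https://contoso.example.com/api/refresh",
        "headers": {
            "Content-Type": "application/json"
        },
        "body": "{\"trigger\":\"manual\"}"
    }
}
```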
data-factory Parameterize Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/parameterize-linked-services.md
Previously updated : 01/17/2022 Last updated : 03/03/2022
All the linked service types are supported for parameterization.
- Azure Database for MySQL - Azure Databricks - Azure File Storage
+- Azure Function
- Azure Key Vault - Azure SQL Database - Azure SQL Managed Instance
All the linked service types are supported for parameterization.
- Generic HTTP - Generic REST - MySQL
+- OData
- Oracle - Oracle Cloud Storage - Salesforce - Salesforce Service Cloud - SFTP
+- SharePoint Online List
- SQL Server **Advanced authoring:** For other linked service types that are not in above list, you can parameterize the linked service by editing the JSON on UI:
data-lake-store Data Lake Store Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/data-lake-store-access-control.md
No, but Default ACLs can be used to set ACLs for child files and folder newly cr
* [POSIX Access Control Lists on Linux](https://www.linux.com/news/posix-acls-linux) * [HDFS permission guide](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html) * [POSIX FAQ](https://www.opengroup.org/austin/papers/posix_faq.html)
-* [POSIX 1003.1 2008](https://standards.ieee.org/findstds/standard/1003.1-2008.html)
+* [POSIX 1003.1 2008](https://standards.ieee.org/wp-content/uploads/import/documents/interpretations/1003.1-2008_interp.pdf)
* [POSIX 1003.1 2013](https://pubs.opengroup.org/onlinepubs/9699919799.2013edition/) * [POSIX 1003.1 2016](https://pubs.opengroup.org/onlinepubs/9699919799.2016edition/) * [POSIX ACL on Ubuntu](https://help.ubuntu.com/community/FilePermissionsACLs)
databox-online Azure Stack Edge Gpu 2202 Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-2202-release-notes.md
Previously updated : 02/14/2022 Last updated : 02/28/2022
This article applies to the **Azure Stack Edge 2202** release, which maps to sof
The 2202 release has the following features and enhancements:
+- **Introduction of Azure Stack Edge Pro 2** - This release introduces Azure Stack Edge Pro 2, a new generation of an AI-enabled edge computing device offered as a service from Microsoft. For more information, see [What is Azure Stack Edge Pro 2?](azure-stack-edge-pro-2-overview.md)
+ - **Clustering support** - This release introduces clustering support for Azure Stack Edge. You can now deploy a two-node device cluster in addition to a single node device. The clustering feature is in preview and is available only for the Azure Stack Edge Pro GPU devices. For more information, see [What is clustering on Azure Stack Edge?](azure-stack-edge-gpu-clustering-overview.md).
The following table provides a summary of known issues in this release.
| | | | |
|**1.**|Preview features |For this release, the following features are available in preview: <ul><li>Clustering and Multi-Access Edge Computing (MEC) for Azure Stack Edge Pro GPU devices only. </li><li>VPN for Azure Stack Edge Pro R and Azure Stack Edge Mini R only.</li><li>Local Azure Resource Manager, VMs, Cloud management of VMs, Kubernetes cloud management, and Multi-process service (MPS) for Azure Stack Edge Pro GPU, Azure Stack Edge Pro R, and Azure Stack Edge Mini R.</li></ul> |These features will be generally available in later releases. |
|**2.**|Update |For a two-node cluster, in rare instances the update may fail. | If the update fails and you see a message indicating that updates are available, retry updating your device. If the update fails and no updates are available, and your device continues to be in maintenance mode, contact Microsoft Support to determine next steps. |
+|**3.**|Wi-Fi |Wi-Fi does not work on Azure Stack Edge Pro 2 in this release. | This functionality will be available in a future release. |
+|**4.**|VPN |The VPN feature appears in the local web UI, but it isn't supported on this device. | This issue will be addressed in a future release. |
## Known issues from previous releases
databox-online Azure Stack Edge Gpu Cluster Failover Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-cluster-failover-scenarios.md
These tables summarize the failure scenarios for a physical hardware component a
| Node A | Node B | Cluster survives | Failover | Details |
|-|-|-|-|-|
-| I PSU fails | No failures | Yes | No | Another power supply failure on node A will result in failover to node B. |
+| 1 PSU fails | No failures | Yes | No | Another power supply failure on node A will result in failover to node B. |
| 1 PSU fails | 1 PSU fails | Yes | No | Another power supply failure on either node will result in failover. |
| 2 PSUs fail | No failures | Yes | Yes | VMs on node A fail over to node B. |
| 2 PSUs fail (TBC) | 1 PSU fails | Yes | Yes | VMs on node A fail over to node B. |
databox-online Azure Stack Edge Gpu Cluster Witness Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-cluster-witness-overview.md
Previously updated : 02/15/2022 Last updated : 02/25/2022 # Cluster witness on your Azure Stack Edge Pro GPU device + This article provides a brief overview of cluster witness on your Azure Stack Edge device including cluster witness requirements, setup, and management. ## About cluster quorum and witness
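For context, on a generic Windows Server failover cluster, a cloud witness backed by an Azure Storage account is configured with the `Set-ClusterQuorum` cmdlet. The following is an illustrative sketch only, not the Azure Stack Edge procedure (the device configures its witness through the local web UI or its PowerShell interface), and the account name and key are placeholders:

```powershell
# Illustrative only: configure a cloud witness on a Windows Server failover
# cluster using an Azure Storage account. Replace the placeholders.
Set-ClusterQuorum -CloudWitness -AccountName "<StorageAccountName>" -AccessKey "<StorageAccountAccessKey>"
```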
databox-online Azure Stack Edge Gpu Clustering Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-clustering-overview.md
Previously updated : 02/15/2022 Last updated : 02/22/2022 # Clustering on your Azure Stack Edge Pro GPU device
-This article provides a brief overview of clustering on your Azure Stack Edge device.
+
+This article provides a brief overview of clustering on your Azure Stack Edge device.
## About failover clustering
The infrastructure cluster on your device provides persistent storage and is sho
## Supported networking topologies
-On your Azure Stack Edge device node:
+Based on the use case and workloads, you can select how the two Azure Stack Edge device nodes are connected. The available networking topologies differ depending on whether you use an Azure Stack Edge Pro GPU device or an Azure Stack Edge Pro 2 device.
+
+The supported network topologies for each of the device types are described here.
+
+### [Azure Stack Edge Pro GPU](#tab/1)
+
+On your Azure Stack Edge Pro GPU device node:
- Port 2 is used for management traffic. - Port 3 and Port 4 are used for storage and cluster traffic. This traffic includes that needed for storage mirroring and Azure Stack Edge cluster heartbeat traffic that is required for the cluster to be online.
-Based on the use-case and workloads, you can select how the two Azure Stack Edge nodes will be connected. The following networking topologies are available:
+The following network topologies are available:
![Available network topologies](media/azure-stack-edge-gpu-clustering-overview/azure-stack-edge-network-topologies.png)
Based on the use-case and workloads, you can select how the two Azure Stack Edge
For more information, see how to [Choose a network topology for your device node](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-network).
+### [Azure Stack Edge Pro 2](#tab/2)
+
+On your Azure Stack Edge Pro 2 device node:
+
+- Port 1 is used for initial configuration. Port 1 is then reconfigured and assigned an IP address that may or may not be in the same subnet as Port 2. Port 1 and Port 2 are used for clustering, storage, and management traffic.
+- Port 3 and Port 4 may be used for Private Multi-Access Edge Computing workload deployment or for storage traffic.
+
+The following network topologies are available:
+
+- **Switchless** - Use this option when you don't have high speed switches available in the environment for storage and cluster traffic. There are further sub-options:
+
+ - **With Port 1 and Port 2 in separate subnets** - This is the default option. In this case, Port 1 and Port 2 have separate virtual switches and are connected to separate subnets.
+
+ - **With Port 1 and Port 2 in the same subnet** - In this case, Port 1 and Port 2 have a teamed virtual switch and both ports are in the same subnet.
-## Cluster deployment
+ In each case, Port 3 and Port 4 are connected back-to-back directly without a switch. These ports are dedicated to storage and Azure Stack Edge cluster traffic and aren't available for workload traffic.
++
+- **Using external switches** - Use this option when you have high speed switches (10 GbE switches) available for use with your device nodes for storage and cluster traffic. There are further sub-options:
+
+ - **With Port 1 and Port 2 in separate subnets** - This is the default option. In this case, Port 1 and Port 2 have separate virtual switches and are connected to separate subnets.
+
+ - **With Port 1 and Port 2 in the same subnet** - In this case, Port 1 and Port 2 have a teamed virtual switch and both ports are in the same subnet.
+
+ In each case, Port 3 and Port 4 are reserved for Private Multi-Access Edge Computing workload deployments.
+
+The pros and cons for each of the above supported topologies can be summarized as follows:
+
+| Local web UI option | Advantages | Disadvantages |
+|-|--|--|
+| Switchless, Port 1 and Port 2 in separate subnet, separate virtual switches | Redundant paths for management and storage traffic. | Clients need to reconnect if Port 1 or Port 2 fails. |
+| | No single point of failure within the device. | |
+| | Lots of bandwidth for storage and cluster traffic across the nodes. | |
+| | Can be deployed with Port 1 and Port 2 in different subnets. | |
+| | | |
+| Switchless, Port 1 and Port 2 in the same subnet, teamed virtual switch | Redundant paths for management and storage traffic. | Teamed virtual switch is a single point of failure in the software. |
+| | Lots of bandwidth for storage and cluster traffic across the nodes. | |
+| | Higher fault tolerance. | |
+| | | |
+| Using external switch, Port 1 and Port 2 in separate subnet, separate virtual switches | Two independent virtual switches and network paths provide redundancy. | Clients need to reconnect if Port 1 or Port 2 fails. |
+| | No single point of failure within the device. | |
+| | Port 1 and Port 2 can be connected to different subnets. | |
+| | | |
+| Using external switch, Port 1 and Port 2 in same subnet, teamed virtual switch | Load balancing. | Teamed switch is a single point of failure in software. |
| | Higher fault tolerance. | Can't be deployed in an environment with different subnets. |
+| | Two independent, redundant paths between the nodes. | |
+| | Clients do not need to reconnect. | |
++++
+## Cluster deployment
+
+### [Azure Stack Edge Pro GPU](#tab/1)
Before you configure clustering on your device, you must cable the devices as per one of the supported network topologies that you intend to configure. To deploy a two-node infrastructure cluster on your Azure Stack Edge devices, follow these high-level steps:
-![Azure Stack Edge clustering deployment](media/azure-stack-edge-gpu-clustering-overview/azure-stack-edge-clustering-deployment-1.png)
+![Figure showing the steps in the deployment of a two-node Azure Stack Edge](media/azure-stack-edge-gpu-clustering-overview/azure-stack-edge-clustering-deployment-1.png)
1. Order two independent Azure Stack Edge devices. For more information, see [Order an Azure Stack Edge device](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource). 1. Cable each node independently as you would for a single node device. Based on the workloads that you intend to deploy, cross connect the network interfaces on these devices via cables, and with or without switches. For detailed instructions, see [Cable your two-node cluster device](azure-stack-edge-gpu-deploy-install.md#cable-the-device).
Before you configure clustering on your device, you must cable the devices as pe
For more information, see the two-node device deployment tutorials starting with [Get deployment configuration checklist](azure-stack-edge-gpu-deploy-checklist.md). +
+### [Azure Stack Edge Pro 2](#tab/2)
+
+Before you configure clustering on your device, you must cable the devices as per one of the supported network topologies that you intend to configure. To deploy a two-node infrastructure cluster on your Azure Stack Edge devices, follow these high-level steps:
+
+![Figure showing the steps in the deployment of a two-node Azure Stack Edge](media/azure-stack-edge-gpu-clustering-overview/azure-stack-edge-clustering-deployment-1.png)
+
+1. Order two independent Azure Stack Edge devices. For more information, see [Order an Azure Stack Edge device](azure-stack-edge-pro-2-deploy-prep.md#create-a-new-resource).
+1. Cable each node independently as you would for a single node device. Based on the workloads that you intend to deploy, cross connect the network interfaces on these devices via cables, and with or without switches. For detailed instructions, see [Cable your two-node cluster device](azure-stack-edge-pro-2-deploy-install.md#cable-the-device).
+1. Start cluster creation on the first node. Choose the network topology that conforms to the cabling across the two nodes. The chosen topology would dictate the storage and clustering traffic between the nodes. See detailed steps in [Configure network and web proxy on your device](azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md).
+1. Prepare the second node. Configure the network on the second node the same way you configured it on the first node. Get the authentication token on this node.
+1. Use the authentication token from the prepared node and join this node to the first node to form a cluster.
+1. Set up a cloud witness using an Azure Storage account or a local witness on an SMB fileshare.
+1. Assign a virtual IP to provide an endpoint for Azure Consistent Services or when using NFS.
+1. Assign compute or management intents to the virtual switches created on the network interfaces. You may also configure Kubernetes node IPs and Kubernetes service IPs here for the network interface enabled for compute.
+1. Optionally configure web proxy, set up device settings, configure certificates and then finally, activate the device.
+
+For more information, see the two-node device deployment tutorials starting with [Get deployment configuration checklist](azure-stack-edge-pro-2-deploy-checklist.md).
+++ ## Clustering workloads On your two-node cluster, you can deploy non-containerized workloads or containerized workloads.
databox-online Azure Stack Edge Gpu Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-checklist.md
zone_pivot_groups: azure-stack-edge-device-deployment
# Deployment checklist for your Azure Stack Edge Pro GPU device
-This article describes the information that can be gathered ahead of the actual deployment of your Azure Stack Edge Pro device.
+This article describes the information that can be gathered ahead of the actual deployment of your Azure Stack Edge Pro GPU device.
Use the following checklist to ensure you have this information after youΓÇÖve placed an order for an Azure Stack Edge Pro device and before youΓÇÖve received the device.
databox-online Azure Stack Edge Pro 2 Deploy Activate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-activate.md
+
+ Title: Tutorial to activate Azure Stack Edge Pro 2 device using the local web UI.
+description: Tutorial to deploy Azure Stack Edge Pro 2 instructs you to activate your physical device.
++++++ Last updated : 03/03/2022+
+# Customer intent: As an IT admin, I need to understand how to activate Azure Stack Edge Pro 2 so I can use it to transfer data to Azure.
+
+# Tutorial: Activate Azure Stack Edge Pro 2
+
+This tutorial describes how you can activate your Azure Stack Edge Pro 2 device by using the local web UI.
+
+The activation process can take around 5 minutes to complete.
+
+In this tutorial, you learn about:
+
+> [!div class="checklist"]
+> * Prerequisites
+> * Activate the physical device
+
+## Prerequisites
+
+Before you configure and set up your Azure Stack Edge Pro 2, make sure that:
+
+* For your physical device:
+
+ - You've installed the physical device as detailed in [Install Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-install.md).
+ - You've configured the network and compute network settings as detailed in [Configure network, compute network, web proxy](azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md)
+ - You've uploaded your own or generated the device certificates on your device if you changed the device name or the DNS domain via the **Device** page. If you haven't done this step, you'll see an error during the device activation and the activation will be blocked. For more information, go to [Configure certificates](azure-stack-edge-pro-2-deploy-configure-certificates.md).
+
+* You have the activation key from the Azure Stack Edge service that you created to manage the Azure Stack Edge Pro 2 device. For more information, go to [Prepare to deploy Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-prep.md).
++
+## Activate the device
+
+1. In the local web UI of the device, go to the **Get started** page.
+2. On the **Activation** tile, select **Activate**.
+
+ ![Screenshot of local web UI with "Activate" highlighted in the Activation tile.](./media/azure-stack-edge-pro-2-deploy-activate/activate-1.png)
+
+3. In the **Activate** pane, enter the **Activation key** that you got in [Get the activation key for Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-prep.md#get-the-activation-key).
+
+4. Select **Activate**.
+
+ ![Screenshot of local web UI with "Activate" highlighted in the Activate blade.](./media/azure-stack-edge-pro-2-deploy-activate/activate-2.png)
++
+5. First, the device is activated. You're then prompted to download the key file.
+
+ ![Screenshot of local web UI with Download and continue highlighted on the Device activated dialog.](./media/azure-stack-edge-pro-2-deploy-activate/activate-3.png)
+
+ Select **Download and continue** and save the *device-serial-no.json* file in a safe location outside of the device. **This key file contains the recovery keys for the OS disk and data disks on your device**. These keys may be needed to facilitate a future system recovery.
+
+ Here are the contents of the *json* file:
+
+ ```json
+ {
+ "Id": "<Device ID>",
+ "DataVolumeBitLockerExternalKeys": {
+ "hcsinternal": "<BitLocker key for data disk>",
+ "hcsdata": "<BitLocker key for data disk>"
+ },
+ "SystemVolumeBitLockerRecoveryKey": "<BitLocker key for system volume>",
+ "SEDEncryptionExternalKey": "<Encryption-at-rest key for encrypted disks>",
+ "ServiceEncryptionKey": "<Azure service encryption key>"
+ }
+ ```
+
+ The following table explains the various keys:
+
+ |Field |Description |
+ |||
+ |`Id` | This is the ID for the device. |
+ |`DataVolumeBitLockerExternalKeys`| These are the BitLocker keys for the data disks and are used to recover the local data on your device.|
+ |`SystemVolumeBitLockerRecoveryKey`| This is the BitLocker key for the system volume. This key helps with the recovery of the system configuration and system data for your device. |
 |`SEDEncryptionExternalKey`| This user-provided or system-generated key is used to protect the self-encrypting data drives that have built-in encryption. |
+ |`ServiceEncryptionKey`| This key protects the data flowing through the Azure service. This key ensures that a compromise of the Azure service won't result in a compromise of stored information. |
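
As a quick sanity check before you archive the key file, a small script can confirm the download contains every field described above. This is an illustrative sketch only; it assumes the JSON layout shown in the example, and the field names come from the table above.

```python
import json

# Fields expected in the device-serial-no.json recovery key file
# (per the table above).
EXPECTED_FIELDS = {
    "Id",
    "DataVolumeBitLockerExternalKeys",
    "SystemVolumeBitLockerRecoveryKey",
    "SEDEncryptionExternalKey",
    "ServiceEncryptionKey",
}

def check_key_file(path: str) -> dict:
    """Load the key file and raise if any recovery key field is missing."""
    with open(path) as f:
        keys = json.load(f)
    missing = EXPECTED_FIELDS - keys.keys()
    if missing:
        raise ValueError(f"key file missing fields: {sorted(missing)}")
    return keys
```

Run the check once against the saved file, then store the file in a safe location outside the device.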
+
+6. Go to the **Overview** page. The device state should show as **Activated**.
+
+ ![Screenshot of local web UI "Overview" page with State highlighted.](./media/azure-stack-edge-pro-2-deploy-activate/activate-4.png)
+
+The device activation is complete. You can now add shares on your device.
+
+If you encounter any issues during activation, go to [Troubleshoot activation and Azure Key Vault errors](azure-stack-edge-gpu-troubleshoot-activation.md#activation-errors).
+++
+## Deploy workloads
+
+After you've activated the device, the next step is to deploy workloads.
+
+- To deploy VM workloads, see [What are VMs on Azure Stack Edge?](azure-stack-edge-gpu-virtual-machine-overview.md) and the associated VM deployment documentation.
+- To deploy network functions as managed applications:
  - Make sure that you create a Device resource for Azure Network Function Manager (NFM) that is linked to the Azure Stack Edge resource. The device resource aggregates all the network functions deployed on the Azure Stack Edge device. For detailed instructions, see [Tutorial: Create a Network Function Manager Device resource (Preview)](../network-function-manager/create-device.md).
+ - You can then deploy Network Function Manager as per the instructions in [Tutorial: Deploy network functions on Azure Stack Edge (Preview)](../network-function-manager/deploy-functions.md).
+- To deploy IoT Edge and Kubernetes workloads:
+ - You'll need to first configure compute as described in [Tutorial: Configure compute on Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-deploy-configure-compute.md). This step creates a Kubernetes cluster that acts as the hosting platform for IoT Edge on your device.
+ - After a Kubernetes cluster is created on your Azure Stack Edge device, you can deploy application workloads on this cluster via any of the following methods:
+
+ - Native access via `kubectl`
+ - IoT Edge
+ - Azure Arc
+
+ For more information on workload deployment, see [Kubernetes workload management on your Azure Stack Edge device](azure-stack-edge-gpu-kubernetes-workload-management.md).
+
+## Next steps
+
+In this tutorial, you learned about:
+
+> [!div class="checklist"]
+> * Prerequisites
+> * Activate the physical device
+
+To learn how to deploy workloads on your Azure Stack Edge device, see:
+
+> [!div class="nextstepaction"]
+> [Configure compute to deploy IoT Edge and Kubernetes workloads on Azure Stack Edge](./azure-stack-edge-gpu-deploy-configure-compute.md)
databox-online Azure Stack Edge Pro 2 Deploy Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-checklist.md
+
+ Title: Predeployment checklist to deploy Azure Stack Edge Pro 2 device | Microsoft Docs
+description: This article describes the information that can be gathered before you deploy your Azure Stack Edge Pro 2 device.
++++++ Last updated : 02/18/2022+
+zone_pivot_groups: azure-stack-edge-device-deployment
+
+# Deployment checklist for your Azure Stack Edge Pro 2 device
+
+This article describes the information that can be gathered ahead of the actual deployment of your Azure Stack Edge Pro 2 device.
+
+Use the following checklist to ensure you have this information after you've placed an order for an Azure Stack Edge Pro 2 device and before you've received the device.
+
+## Deployment checklist
++
+| Stage | Parameter | Details |
+|--|-|--|
+| Device management | <li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li>|<li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials</li> |
+| Device installation | One power cable in the package. <!--<br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped.--> | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
+| | <li>At least one 1-GbE RJ-45 network cable for Port 1.</li><li>At least one 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) for each data network interface (Port 3 and Port 4) to be configured.</li><li>At least one 100-GbE network switch to connect a 1-GbE or a 100-GbE network interface to the Internet for data.</li>| Customer needs to procure these cables.<br>For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware%20Compatible%20Products).|
+| First-time device connection | <li>Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adapter. </li><!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
+| Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. |
+| Network settings | Device comes with 2 x 10/1-GbE, 2 x 100-GbE network ports. <li>Port 1 is used to configure management settings only. One or more data ports can be connected and configured. </li><li> At least one data network interface from among Port 2 - Port 4 needs to be connected to the Internet (with connectivity to Azure).</li><li> DHCP and static IPv4 configuration supported.</li> | Static IPv4 configuration requires IP, DNS server, and default gateway. |
+| Advanced networking settings | <li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li>| Only static IPv4 configuration is supported.|
+| (Optional) Web proxy settings | <li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li> | |
+| Firewall and port settings | If using firewall, make sure the [listed URLs patterns and ports](azure-stack-edge-pro-2-system-requirements.md#url-patterns-for-firewall-rules) are allowed for device IPs. | |
+| (Recommended) Time settings | Configure time zone, primary NTP server, secondary NTP server. | Configure primary and secondary NTP server on local network.<br>If local server isn't available, public NTP servers can be configured. |
+| (Optional) Update server settings | <li>Require update server IP address on local network, path to WSUS server. </li> | By default, the public Windows Update server is used.|
+| Device settings | <li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li> | |
+| (Optional) Certificates | To test non-production workloads, use [Generate certificates option](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates) <br><br> If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change the device name and/or DNS domain. |
+| Activation | Require activation key from the Azure Stack Edge resource. | Once generated, the key expires in three days. |
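
As an aside, the device sign-in rule in the table above (8 to 16 characters, at least three of the four character classes) is easy to pre-check when scripting your deployment. The helper below is a hypothetical sketch of that rule, not a Microsoft-supplied tool:

```python
import re

def meets_password_rule(pw: str) -> bool:
    """Check the stated rule: 8-16 characters and at least three of
    uppercase, lowercase, numeric, and special characters."""
    if not 8 <= len(pw) <= 16:
        return False
    classes = [
        re.search(r"[A-Z]", pw),       # uppercase
        re.search(r"[a-z]", pw),       # lowercase
        re.search(r"[0-9]", pw),       # numeric
        re.search(r"[^A-Za-z0-9]", pw) # special
    ]
    return sum(1 for c in classes if c) >= 3
```

For example, the expired default *Password1* satisfies the rule (three classes, nine characters), while an all-lowercase string does not.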
+++
+| Stage | Parameter | Details |
+|--|-|--|
+| Device management | <li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li>|<li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials</li> |
+| Device installation | One power cable in the package. <!--<br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped.--> | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |
+| | <li>At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes </li><li> You would need two 1-GbE network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you may also need at least one 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) to connect Port 3 and Port 4 across the device nodes. </li><li> You would also need at least one 100-GbE network switch to connect a 1 GbE or a 100-GbE network interface to the Internet for data.</li>| Customer needs to procure these cables and switches.<br>For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware%20Compatible%20Products).|
+| First-time device connection | <li>Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adapter. </li><!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| |
+| Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. |
+| Network settings | Device comes with 2 x 10/1-GbE, 2 x 100-GbE network ports. <li>Port 1 is used to configure management settings only. One or more data ports can be connected and configured. </li><li> At least one data network interface from among Port 2 - Port 4 needs to be connected to the Internet (with connectivity to Azure).</li><li> DHCP and static IPv4 configuration supported.</li> | Static IPv4 configuration requires IP, DNS server, and default gateway. |
+| Advanced networking settings | <li>Require 2 free, static, contiguous IPs for Kubernetes nodes, and one static IP for IoT Edge service.</li><li>Require one additional IP for each extra service or module that you'll deploy.</li>| Only static IPv4 configuration is supported.|
+| (Optional) Web proxy settings | <li>Web proxy server IP/FQDN, port </li><li>Web proxy username, password</li> | |
+| Firewall and port settings | If using firewall, make sure the [listed URLs patterns and ports](azure-stack-edge-pro-2-system-requirements.md#url-patterns-for-firewall-rules) are allowed for device IPs. | |
+| (Recommended) Time settings | Configure time zone, primary NTP server, secondary NTP server. | Configure primary and secondary NTP server on local network.<br>If local server isn't available, public NTP servers can be configured. |
+| (Optional) Update server settings | <li>Require update server IP address on local network, path to WSUS server. </li> | By default, the public Windows Update server is used.|
+| Device settings | <li>Device fully qualified domain name (FQDN) </li><li>DNS domain</li> | |
+| (Optional) Certificates | To test non-production workloads, use [Generate certificates option](azure-stack-edge-gpu-deploy-configure-certificates.md#generate-device-certificates) <br><br> If you bring your own certificates including the signing chain(s), [Add certificates](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates) in appropriate format.| Configure certificates only if you change the device name and/or DNS domain. |
+| Activation | Require activation key from the Azure Stack Edge resource. | Once generated, the key expires in three days. |
++
+## Next steps
+
+Prepare to deploy your [Azure Stack Edge Pro 2 device](azure-stack-edge-pro-2-deploy-prep.md).
databox-online Azure Stack Edge Pro 2 Deploy Configure Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-certificates.md
+
+ Title: Tutorial to configure certificates for Azure Stack Edge Pro 2 device via the local web UI
+description: Tutorial to deploy Azure Stack Edge Pro 2 instructs you to configure certificates on your physical device.
++++++ Last updated : 03/02/2022+
+# Customer intent: As an IT admin, I need to understand how to configure certificates for Azure Stack Edge Pro 2 so I can use it to establish a trust relationship between the device and the clients accessing the device.
+
+# Tutorial: Configure certificates for your Azure Stack Edge Pro 2
+
+This tutorial describes how you can configure certificates for your Azure Stack Edge Pro 2 by using the local web UI.
+
+The time taken for this step can vary depending on the specific option you choose and how the certificate flow is established in your environment.
+
+In this tutorial, you learn about:
+
+> [!div class="checklist"]
+>
+> * Prerequisites
+> * Configure certificates for the physical device
+> * Configure encryption-at-rest
+
+## Prerequisites
+
+Before you configure and set up your Azure Stack Edge Pro 2 device, make sure that:
+
+* You've installed the physical device as detailed in [Install Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-install.md).
+* If you plan to bring your own certificates:
+
+ - You should have your certificates ready in the appropriate format including the signing chain certificate.
+ - If your device is deployed in Azure Government and not deployed in Azure public cloud, a signing chain certificate is required before you can activate your device.
+
+ For details on certificates, go to [Prepare certificates to upload on your Azure Stack Edge device](azure-stack-edge-gpu-prepare-certificates-device-upload.md).
++
+## Configure certificates for device
+
+1. Open the **Certificates** page in the local web UI of your device. This page will display the certificates available on your device. The device is shipped with self-signed certificates, also referred to as the device certificates. You can also bring your own certificates.
+
+1. *Follow this step only if you didn't change the device name or DNS domain when you [configured device settings earlier](azure-stack-edge-gpu-deploy-set-up-device-update-time.md#configure-device-settings), and you don't want to use your own certificates.*
+
+ You don't need to perform any configuration on this page. You just need to verify that the status of all the certificates shows as valid on this page.
+
+ ![Screenshot of the Certificates page in the local web UI of Azure Stack Edge. The Certificates menu item is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/verify-certificate-status-1.png)
+
+ You're ready to configure [Encryption-at-rest](#configure-encryption-at-rest) with the existing device certificates.
+
+1. *Follow the remaining steps only if you've changed the device name or the DNS domain for your device.* In these instances, the status of your device certificates will be **Not valid**. That's because the device name and DNS domain in the certificates' `subject name` and `subject alternative name` fields are out of date.
+
+ You can select a certificate to view status details.
+
+ ![Screenshot of Certificate Details for a certificate on the Certificates page in the local web UI of an Azure Stack Edge device. The selected certificate and certificate details are highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/generate-certificate-1.png)
+
+1. If you've changed the device name or DNS domain of your device, and you don't provide new certificates, **activation of the device will be blocked**. To use a new set of certificates on your device, choose one of the following options:
+
+ - **Generate all the device certificates**. Select this option, and then complete the steps in [Generate device certificates](#generate-device-certificates), if you plan to use automatically generated device certificates and need to generate new device certificates. You should only use these device certificates for testing, not with production workloads.
+
+ - **Bring your own certificates**. Select this option, and then do the steps in [Bring your own certificates](#bring-your-own-certificates), if you want to use your own signed endpoint certificates and the corresponding signing chains. **We recommend that you always bring your own certificates for production workloads.**
+
+ - You can choose to bring some of your own certificates and generate some device certificates. The **Generate all the device certificates** option only regenerates the device certificates.
+
+
+1. When you have a full set of valid certificates for your device, select **< Back to Get started**. You can now proceed to configure [Encryption-at-rest](#configure-encryption-at-rest).
+
+ <!--![Screenshot of the Certificates page on an Azure Stack Edge device with a full set of valid certificates. The certificate states and the Back To Get Started button are highlighted.](./media/azure-stack-edge-gpu-deploy-configure-certificates/proceed-to-activate-1.png)-->
++
+### Generate device certificates
+
+Use these steps to regenerate and download the Azure Stack Edge Pro 2 device certificates:
+
+1. In the local UI of your device, go to **Configuration > Certificates**. Select **Generate certificates**.
+
+ ![Screenshot of the Certificates page in the local web UI of an Azure Stack Edge device. The Generate Certificates button is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/generate-certificate-3.png)
+
+2. In the **Generate device certificates** pane, select **Generate**.
+
+ ![Screenshot of the Generate Certificates pane for an Azure Stack Edge device. The Generate button is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/generate-certificate-4.png)
+
+ The device certificates are now generated and applied. It takes a few minutes to generate and apply the certificates.
+
+ > [!IMPORTANT]
+ > While the certificate generation operation is in progress, do not bring your own certificates and try to add those via the **+ Add certificate** option.
+
+ You're notified when the operation is successfully completed. **To avoid any potential cache issues, restart your browser.**
+
+ ![Screenshot showing the notification that certificates were successfully generated on an Azure Stack Edge device.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/generate-certificate-5.png)
+
+3. After the certificates are generated:
+
+ - Make sure that the status of all the certificates is shown as **Valid**.
+
+ ![Screenshot of newly generated certificates on the Certificates page of an Azure Stack Edge device. Certificates with Valid state are highlighted.](./media/azure-stack-edge-gpu-deploy-configure-certificates/generate-certificate-6.png)
+
+ - You can select a specific certificate name, and view the certificate details.
+
+ ![Screenshot of Local web UI certificate details highlighted on the Certificates page of an Azure Stack Edge device.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/generate-certificate-7.png)
+
+ - The **Download** column is now populated. This column has links to download the regenerated certificates.
+
+ ![Screenshot of the Certificates page on an Azure Stack Edge device. The download links for generated certificates are highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/generate-certificate-8.png)
++
+4. Select the download link for a certificate and when prompted, save the certificate.
+
+ ![Screenshot of the Certificates page on an Azure Stack Edge device. A download link has been selected. The link and the download options are highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/generate-certificate-9.png)
+
+5. Repeat this process for all the certificates that you wish to download.
+
+ ![Screenshot showing downloaded certificates in Windows File Explorer. Certificates for an Azure Stack Edge device are highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/generate-certificate-10.png)
+
+ The device generated certificates are saved as DER certificates with the following name format:
+
+ `<Device name>_<Endpoint name>.cer`. These certificates contain the public key for the corresponding certificates installed on the device.
+
+You'll need to install these certificates on the client system that you're using to access the endpoints on the Azure Stack Edge device. These certificates establish trust between the client and the device.
+
+To import and install these certificates on the client that you're using to access the device, follow the steps in [Import certificates on the clients accessing your Azure Stack Edge Pro GPU device](azure-stack-edge-gpu-manage-certificates.md#import-certificates-on-the-client-accessing-the-device).
+
+If you're using Azure Storage Explorer, the certificates installed on your client must be in PEM format, so you'll need to convert the device-generated certificates to PEM format.
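
The DER-to-PEM conversion can be done with Python's standard library, as a sketch (the file names below are hypothetical placeholders for your downloaded `.cer` files):

```python
import ssl
from pathlib import Path

def der_to_pem(der_path: str, pem_path: str) -> str:
    """Convert a device-generated .cer (DER) certificate to PEM format."""
    der_bytes = Path(der_path).read_bytes()
    # Wraps the Base64-encoded body in BEGIN/END CERTIFICATE markers.
    pem_text = ssl.DER_cert_to_PEM_cert(der_bytes)
    Path(pem_path).write_text(pem_text)
    return pem_text

# Hypothetical usage:
# der_to_pem("mydevice_blob.cer", "mydevice_blob.pem")
```

You can also use `openssl x509 -inform der -in <name>.cer -out <name>.pem` to achieve the same result.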
+
+> [!IMPORTANT]
+> - The download link is only available for the device generated certificates and not if you bring your own certificates.
+> - You can decide to have a mix of device generated certificates and bring your own certificates as long as other certificate requirements are met. For more information, go to [Certificate requirements](azure-stack-edge-gpu-certificate-requirements.md).
+
+
+### Bring your own certificates
+
+You can bring your own certificates.
+
+- Start by understanding the [Types of certificates that can be used with your Azure Stack Edge device](azure-stack-edge-gpu-certificates-overview.md).
+- Next, review the [Certificate requirements for each type of certificate](azure-stack-edge-gpu-certificate-requirements.md).
+- You can then [Create your certificates via Azure PowerShell](azure-stack-edge-gpu-create-certificates-powershell.md) or [Create your certificates via Readiness Checker tool](azure-stack-edge-gpu-create-certificates-tool.md).
+- Finally, [Convert the certificates to appropriate format](azure-stack-edge-gpu-prepare-certificates-device-upload.md) so that they're ready to upload on to your device.
+
+Follow these steps to upload your own certificates including the signing chain.
+
+1. To upload certificate, on the **Certificate** page, select **+ Add certificate**.
+
+ ![Screenshot of the Add Certificate pane in the local web UI of an Azure Stack Edge device. The Certificates menu item, Plus Add Certificate button, and Add Certificate pane are highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/add-certificate-1.png)
+
+2. You can skip this step if you included all certificates in the certificate path when you [exported certificates in .pfx format](azure-stack-edge-gpu-prepare-certificates-device-upload.md#export-certificates-as-pfx-format-with-private-key). If you didn't include all certificates in your export, upload the signing chain, and then select **Validate & add**. You need to do this before you upload your other certificates.
+
+ In some cases, you may want to bring a signing chain alone for other purposes - for example, to connect to your update server for Windows Server Update Services (WSUS).
+
+ ![Screenshot of the Add Certificate pane for a Signing Chain certificate in the local web UI of an Azure Stack Edge device. The certificate type, certificate entries, and Validate And Add button are highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/add-certificate-2.png)
+
+3. Upload other certificates. For example, you can upload the Azure Resource Manager and Blob storage endpoint certificates.
+
+ ![Screenshot of the Add Certificate pane for endpoints for an Azure Stack Edge device. The certificate type and certificate entries are highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/add-certificate-3.png)
+
    You can also upload the local web UI certificate. After you upload this certificate, you'll need to restart your browser and clear the cache. You'll then need to reconnect to the device local web UI.
+
+ ![Local web UI "Certificates" page 7](./media/azure-stack-edge-pro-2-deploy-configure-certificates/add-certificate-4.png)
+
+ You can also upload the node certificate.
+
+ ![Screenshot of the Add Certificate pane for the Local Web UI certificate for an Azure Stack Edge device. The certificate type and certificate entries highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/add-certificate-5.png)
+
+ At any time, you can select a certificate and view the details to ensure that these match with the certificate that you uploaded.
+
+ ![Screenshot of the Add Certificate pane for a node certificate for an Azure Stack Edge device. The certificate type and certificate entries highlighted.](./media/azure-stack-edge-gpu-deploy-configure-certificates/add-certificate-6.png)
+
+ The certificate page should update to reflect the newly added certificates.
+
+ ![Screenshot of the Certificates page in the local web UI for an Azure Stack Edge device. A newly added set of certificates is highlighted.](./media/azure-stack-edge-gpu-deploy-configure-certificates/add-certificate-7.png)
+
+ > [!NOTE]
    > Except for the Azure public cloud, signing chain certificates must be brought in before activation for all cloud configurations (Azure Government or Azure Stack).
++
+## Configure encryption-at-rest
+
+1. On the **Security** tile, select **Configure** for encryption-at-rest.
+
+ > [!NOTE]
    > This setting is required. Until it's successfully configured, you can't activate the device.
+
    At the factory, once the devices are imaged, volume-level BitLocker encryption is enabled. After you receive the device, you need to configure the encryption-at-rest. The storage pool and volumes are recreated and you can provide BitLocker keys to enable encryption-at-rest and thus create a second layer of encryption for your data-at-rest.
+
+1. In the **Encryption-at-rest** pane, provide a 32-character Base64-encoded key. This is a one-time configuration, and this key is used to protect the actual encryption key. You can choose to automatically generate this key.
+
    ![Screenshot of the local web UI "Encryption at rest" pane with system-generated key.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/encryption-key-1.png)
+
    You can also enter your own Base64-encoded AES 256-bit encryption key.
+
+ ![Screenshot of the local web UI "Encryption at rest" pane with bring your own key.](./media/azure-stack-edge-pro-2-deploy-configure-certificates/encryption-key-2.png)
+
+ The key is saved in a key file on the **Cloud details** page after the device is activated.
+
+1. Select **Apply**. This operation takes several minutes and the status of operation is displayed.
+
+ ![Screenshot of the "Double encryption at rest" notification. ](./media/azure-stack-edge-pro-2-deploy-configure-certificates/encryption-at-rest-status-1.png)
+
+1. After the status shows as **Completed**, your device is now ready to be activated. Select **< Back to Get started**.
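
If you'd rather bring your own key than have the UI generate one, a key of the right shape is easy to produce ahead of time. This is an illustrative sketch only; confirm the exact key length the **Encryption-at-rest** pane expects (24 random bytes encode to a 32-character Base64 string, as described above):

```python
import base64
import secrets

# Illustrative only: 24 cryptographically random bytes Base64-encode to a
# 32-character string. Record the key somewhere safe; you'll need it again
# for any future device recovery.
raw = secrets.token_bytes(24)
key = base64.b64encode(raw).decode("ascii")
print(key)  # a 32-character Base64 string
```

Paste the printed value into the pane and store it alongside your other recovery keys.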
++
+## Next steps
+
+In this tutorial, you learned about:
+
+> [!div class="checklist"]
+>
+> * Prerequisites
+> * Configure certificates for the physical device
+> * Configure encryption-at-rest
+
+To learn how to activate your Azure Stack Edge Pro 2 device, see:
+
+> [!div class="nextstepaction"]
+> [Activate Azure Stack Edge Pro 2](./azure-stack-edge-pro-2-deploy-activate.md)
databox-online Azure Stack Edge Pro 2 Deploy Configure Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-compute.md
+
+ Title: Tutorial to filter, analyze data with compute on Azure Stack Edge Pro 2
+description: Learn how to configure compute role on Azure Stack Edge Pro 2 and use it to transform data before sending to Azure.
++++++ Last updated : 02/28/2022+
+# Customer intent: As an IT admin, I need to understand how to configure compute on Azure Stack Edge Pro so I can use it to transform the data before sending it to Azure.
++
+# Tutorial: Configure compute on Azure Stack Edge Pro 2
++
+This tutorial describes how to configure a compute role and create a Kubernetes cluster on your Azure Stack Edge Pro 2 device.
+
+This procedure can take around 20 to 30 minutes to complete.
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Configure compute
+> * Get Kubernetes endpoints
+
+
+## Prerequisites
+
+Before you set up a compute role on your Azure Stack Edge Pro 2 device, make sure that:
+
+- You've activated your Azure Stack Edge Pro 2 device as described in [Activate Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-activate.md).
+- You've followed the instructions in [Enable compute network](azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy.md#configure-virtual-switches-and-compute-ips) and:
+ - Enabled a network interface for compute.
+ - Assigned Kubernetes node IPs and Kubernetes external service IPs.
+
+## Configure compute
++
+## Get Kubernetes endpoints
+
+To configure a client to access the Kubernetes cluster, you'll need the Kubernetes endpoint. Follow these steps to get the Kubernetes API endpoint from the local UI of your Azure Stack Edge device.
+
+1. In the local web UI of your device, go to the **Devices** page.
+2. Under **Device endpoints**, copy the **Kubernetes API service** endpoint. This endpoint is a string in the following format: `https://compute.<device-name>.<DNS-domain>[Kubernetes-cluster-IP-address]`.
+
+ ![Device page in local UI](./media/azure-stack-edge-gpu-create-kubernetes-cluster/device-kubernetes-endpoint-1.png)
+
+3. Save the endpoint string. You will use this endpoint string later when configuring a client to access the Kubernetes cluster via kubectl.
+
+4. While you are in the local web UI, you can:
+
+ - Go to Kubernetes API, select **advanced settings**, and download an advanced configuration file for Kubernetes.
+
+ ![Device page in local UI 1](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-1.png)
+
+ If you have been provided a key by Microsoft (select users may have one), you can use this config file.
+
+ ![Device page in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-compute/download-advanced-config-2.png)
+
+ - You can also go to **Kubernetes dashboard** endpoint and download an `aseuser` config file.
+
+ ![Device page in local UI 3](./media/azure-stack-edge-gpu-deploy-configure-compute/download-aseuser-config-1.png)
+
+ You can use this config file to sign into the Kubernetes dashboard or debug any issues in your Kubernetes cluster. For more information, see [Access Kubernetes dashboard](azure-stack-edge-gpu-monitor-kubernetes-dashboard.md#access-dashboard).
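
To illustrate the endpoint format described above, here's a minimal Python sketch that splits a `https://compute.<device-name>.<DNS-domain>` endpoint string into its device name and DNS domain, for example to build a DNS or hosts-file entry for kubectl access. The endpoint value and the `parse_kubernetes_endpoint` helper are hypothetical, for illustration only.

```python
from urllib.parse import urlparse

def parse_kubernetes_endpoint(endpoint):
    """Split an endpoint of the form https://compute.<device-name>.<DNS-domain>
    into (device_name, dns_domain)."""
    host = urlparse(endpoint).hostname
    _, _, rest = host.partition(".")          # drop the leading 'compute' label
    device_name, _, dns_domain = rest.partition(".")
    return device_name, dns_domain

# Hypothetical endpoint, as copied from the Device endpoints section:
print(parse_kubernetes_endpoint("https://compute.myasedevice.wdshcsso.com"))
```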
++
+## Next steps
+
+In this tutorial, you learned how to:
+
+> [!div class="checklist"]
+> * Configure compute
+> * Get Kubernetes endpoints
++
+To learn how to administer your Azure Stack Edge Pro 2 device, see:
+
+> [!div class="nextstepaction"]
+> [Use local web UI to administer an Azure Stack Edge Pro 2](azure-stack-edge-manage-access-power-connectivity-mode.md)
databox-online Azure Stack Edge Pro 2 Deploy Configure Network Compute Web Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md
+
+ Title: Tutorial to configure network settings for Azure Stack Edge Pro 2 device
+description: Tutorial to deploy Azure Stack Edge Pro 2 instructs you to configure network, compute network, and web proxy settings for your physical device.
++++++ Last updated : 03/01/2022+
+zone_pivot_groups: azure-stack-edge-device-deployment
+# Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
+
+# Tutorial: Configure network for Azure Stack Edge Pro 2
++
+This tutorial describes how to configure network for your Azure Stack Edge Pro 2 device by using the local web UI.
+
+The connection process can take around 20 minutes to complete.
+++
+This tutorial describes how to configure network for your two-node Azure Stack Edge Pro 2 device by using the local web UI.
+
+The procedure can take around 45 minutes to complete.
++
+In this tutorial, you learn about:
++
+> [!div class="checklist"]
+> * Prerequisites
+> * Configure network
+> * Configure advanced networking
+> * Configure web proxy
+++
+> [!div class="checklist"]
+> * Prerequisites
+> * Select device setup type
+> * Configure network and network topology on both nodes
+> * Get authentication token for prepared node
+> * Configure cluster witness and add prepared node
+> * Configure virtual IP settings for Azure Consistent Services and NFS
+> * Configure advanced networking
+> * Configure web proxy
++
+## Prerequisites
+
+Before you configure and set up your Azure Stack Edge Pro 2 device, make sure that:
+
+* You've installed the physical device as detailed in [Install Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-install.md).
+* You've connected to the local web UI of the device as detailed in [Connect to Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-connect.md)
++
+## Configure network
+
+Your **Get started** page displays the various settings that are required to configure and register the physical device with the Azure Stack Edge service.
+
+Follow these steps to configure the network for your device.
+
+1. In the local web UI of your device, go to the **Get started** page.
+
+2. On the **Network** tile, select **Configure**.
+
+ ![Screenshot of the Get started page in the local web UI of an Azure Stack Edge device. The Needs setup is highlighted on the Network tile.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-1.png)
+
+ On your physical device, there are four network interfaces. Port 1 and Port 2 are 1-Gbps network interfaces that can also serve as 10-Gbps network interfaces. Port 3 and Port 4 are 100-Gbps network interfaces. Port 1 is automatically configured as a management-only port, and Port 2 to Port 4 are all data ports. For a new device, the **Network** page is as shown below.
+
+ ![Screenshot of the Network page in the local web UI of an Azure Stack Edge device whose network isn't configured.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-2.png)
+
+3. To change the network settings, select a port and in the right pane that appears, modify the IP address, subnet, gateway, primary DNS, and secondary DNS.
+
+ - If you select Port 1, you can see that it's preconfigured as static.
+
+ ![Screenshot of the Port 1 Network settings in the local web UI of an Azure Stack Edge device.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-3.png)
+
+ - If you select Port 2, Port 3, or Port 4, all of these ports are configured as DHCP by default.
+
+ ![Screenshot of the Port 3 Network settings in the local web UI of an Azure Stack Edge device.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-4.png)
+
+ As you configure the network settings, keep in mind:
+
+ * Port 3 and Port 4 are reserved for Network Function Manager workload deployments. For more information, see [Tutorial: Deploy network functions on Azure Stack Edge](../network-function-manager/deploy-functions.md).
+ * If DHCP is enabled in your environment, network interfaces are automatically configured. An IP address, subnet, gateway, and DNS are automatically assigned.
+ * If DHCP isn't enabled, you can assign static IPs if needed.
+ * You can configure your network interface as IPv4.
+ * <!--ENGG TO VERIFY --> Network Interface Card (NIC) Teaming or link aggregation isn't supported with Azure Stack Edge.
+ * <!--ENGG TO VERIFY --> In this release, the 100-GbE interfaces aren't configured for RDMA mode.
+ * The serial number of any port corresponds to the node serial number.
+
+ Once the device network is configured, the page updates as shown below.
+
+ ![Screenshot of the Network page in the local web UI of an Azure Stack Edge device whose network is configured.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-5.png)
++
+ > [!NOTE]
+ > We recommend that you do not switch the local IP address of the network interface from static to DHCP, unless you have another IP address to connect to the device. If you're using one network interface and you switch to DHCP, there's no way to determine the DHCP address. If you want to change to a DHCP address, wait until after the device has activated with the service, and then change it. You can then view the IPs of all the adapters in the **Device properties** in the Azure portal for your service.
++
+ After you've configured and applied the network settings, select **Next: Advanced networking** to configure the compute network.
+
+## Configure advanced networking
+
+Follow these steps to configure advanced network settings such as creating a switch for compute and associating it with a virtual network.
+
+> [!NOTE]
+> There is no restriction on the number of virtual switches that you can create on your device. However, you can enable compute only on one virtual switch at a time.
+
+1. In the local web UI of your device, go to the **Advanced networking** page. Select **Add virtual switch** to create a new virtual switch or use an existing virtual switch. This virtual switch will be used for the compute infrastructure on the device.
++
+ ![Screenshot of the Advanced networking page in the local web UI of an Azure Stack Edge device. The Add virtual switch button is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/advanced-networking-1.png)
+
+1. In **Add virtual switch** blade:
+
+ 1. Provide a name for your virtual switch.
+ 1. Associate a network interface on your device with the virtual switch you'll create. You can only have one virtual switch associated with a network interface on your device.
+ 1. Assign an intent for your virtual switch. To deploy compute workloads, you'll select compute as the intent.
+ 1. Assign **Kubernetes node IPs**. These static IP addresses are for the compute VM that will be created on this virtual switch.
+
+ For an *n*-node device, provide a contiguous range of at least *n+1* IPv4 addresses for the compute VM by using the start and end IP addresses. For a 1-node device, provide a minimum of 2 contiguous IPv4 addresses.
+
+ > [!IMPORTANT]
+ > Kubernetes on Azure Stack Edge uses 172.27.0.0/16 subnet for pod and 172.28.0.0/16 subnet for service. Make sure that these are not in use in your network. If these subnets are already in use in your network, you can change these subnets by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
+
+ 1. Assign **Kubernetes external service IPs**. These are also the load-balancing IP addresses. These contiguous IP addresses are for services that you want to expose outside of the Kubernetes cluster and you specify the static IP range depending on the number of services exposed.
+
+ > [!IMPORTANT]
+ > We strongly recommend that you specify a minimum of 1 IP address for Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services/IoT Edge modules (1 per service/module) that need to be accessed from outside the cluster. The service IP addresses can be updated later.
+
+ 1. Select **Apply**.
+
+ ![Screenshot of the Add virtual switch blade in the local web UI of an Azure Stack Edge device. The Apply button is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/advanced-networking-2.png)
+
+1. You'll see a warning to the effect that you may need to wait for a couple of minutes and then refresh the browser. Select **OK**.
+
+ ![Screenshot of the Refresh warning in the local web UI of an Azure Stack Edge device. The OK button is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/advanced-networking-3.png)
++
+1. After the configuration is applied and you've refreshed the browser, you can see that the specified port is enabled for compute.
+
+ ![Screenshot of the Advanced networking page in the local web UI of an Azure Stack Edge device. The newly added virtual switch is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/advanced-networking-4.png)
++
+1. Optionally, you can create a virtual network and associate it with your virtual switch if you want to route your traffic. Select **Add virtual network** and then input the following information.
+
+ 1. Select a **Virtual switch** to which you'll add a virtual network.
+ 1. Provide a **Name** for the virtual network.
+ 1. Supply a unique number from 1-4096 as your **VLAN ID**.
+ 1. Enter a **Subnet mask** and a **Gateway** depending on the configuration of your physical network in the environment.
+ 1. Select **Apply**.
+
+ ![Screenshot of the Add virtual network blade in the local web UI of an Azure Stack Edge device. The Apply button is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/advanced-networking-5.png)
+
+1. After the configuration is applied, you can see that the specified virtual network is created.
+
+ ![Screenshot of the Advanced networking page in the local web UI of an Azure Stack Edge device. The newly added virtual network is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/advanced-networking-6.png)
+
+ Select **Next: Web proxy** to configure web proxy.
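
Before entering the Kubernetes node IPs in the local UI, you may want to sanity-check the range. The sizing rule (at least *n+1* contiguous addresses for an *n*-node device) and the reserved pod/service subnets can be sketched in Python; this is an illustrative check only, and the addresses and the `node_ip_range_ok` helper are hypothetical, not part of the device tooling.

```python
import ipaddress

# Default Kubernetes pod and service subnets reserved on Azure Stack Edge:
RESERVED = [ipaddress.ip_network("172.27.0.0/16"),
            ipaddress.ip_network("172.28.0.0/16")]

def node_ip_range_ok(start, end, node_count):
    """Return True when start..end is a contiguous IPv4 range of at least
    node_count + 1 addresses and no address overlaps the reserved subnets."""
    start_ip = ipaddress.ip_address(start)
    end_ip = ipaddress.ip_address(end)
    size = int(end_ip) - int(start_ip) + 1
    if size < node_count + 1:
        return False
    for offset in range(size):
        ip = ipaddress.ip_address(int(start_ip) + offset)
        if any(ip in net for net in RESERVED):
            return False
    return True

# Example: a 1-node device needs at least 2 contiguous addresses.
print(node_ip_range_ok("10.126.72.200", "10.126.72.202", 1))
```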
+++
+## Configure setup type
+
+1. In the local UI for one of the devices, go to the **Get started** page.
+1. In the **Set up a 2-node cluster** tile, select **Start**.
+
+ ![Screenshot of the local web UI "Set up a 2-node cluster" on "Get started" page.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/setup-type-two-node-1.png)
+
+1. In the local UI for the second device, go to the **Get started** page.
+1. In the **Prepare a node** tile, select **Start**.
+
+ ![Screenshot of the local web UI "Prepare a node" on "Get started" page.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/setup-type-prepare-node-1.png)
++
+## Configure network, topology
+
+You'll configure the network and network topology on both nodes. These steps can be done in parallel. The cabling on both nodes should be identical and should conform to the network topology you choose.
+
+### Configure network on first node
+
+Follow these steps to configure the network for your device.
+
+1. In the local web UI of your device, go to the **Get started** page.
+
+2. On the **Network** tile, select **Configure**.
+
+ ![Screenshot of the Get started page in the local web UI of an Azure Stack Edge device. The Needs setup is highlighted on the Network tile.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-network-1.png)
+
+ On your physical device, there are four network interfaces. Port 1 and Port 2 are 1-Gbps network interfaces that can also serve as 10-Gbps network interfaces. Port 3 and Port 4 are 100-Gbps network interfaces. Port 1 is automatically configured as a management-only port, and Port 2 to Port 4 are all data ports. Though Port 6 shows up in the local UI as the Wi-Fi port, the Wi-Fi functionality isn't available in this release.
+
+ For a new device, the **Network** page is as shown below.
+
+ ![Screenshot of the Network page in the local web UI of an Azure Stack Edge device whose network isn't configured.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-2.png)
+
+3. To change the network settings, select a port and in the right pane that appears, modify the IP address, subnet, gateway, primary DNS, and secondary DNS.
+
+ - If you select Port 1, you can see that it's preconfigured as static.
+
+ ![Screenshot of the Port 1 Network settings in the local web UI of an Azure Stack Edge device.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-3.png)
+
+ - If you select Port 2, Port 3, or Port 4, all of these ports are configured as DHCP by default.
+
+ ![Screenshot of the Port 3 Network settings in the local web UI of an Azure Stack Edge device.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-4.png)
+
+ As you configure the network settings, keep in mind:
+
+ * Make sure that Port 3 and Port 4 are connected for Network Function Manager deployments. For more information, see [Tutorial: Deploy network functions on Azure Stack Edge](../network-function-manager/deploy-functions.md).
+ * If DHCP is enabled in your environment, network interfaces are automatically configured. An IP address, subnet, gateway, and DNS are automatically assigned.
+ * If DHCP isn't enabled, you can assign static IPs if needed.
+ * You can configure your network interface as IPv4.
+ * <!--ENGG TO VERIFY --> Network Interface Card (NIC) Teaming or link aggregation isn't supported with Azure Stack Edge.
+ * <!--ENGG TO VERIFY --> In this release, the 100-GbE interfaces aren't configured for RDMA mode.
+ * The serial number of any port corresponds to the node serial number.
+ * Though Port 6 shows up in the local UI as the Wi-Fi port, the Wi-Fi functionality isn't available in this release.
+
+ Once the device network is configured, the page updates as shown below.
+
+ ![Screenshot of the Network page in the local web UI of an Azure Stack Edge device whose network is configured.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-5.png)
++
+ > [!NOTE]
+ > We recommend that you do not switch the local IP address of the network interface from static to DHCP, unless you have another IP address to connect to the device. If you're using one network interface and you switch to DHCP, there's no way to determine the DHCP address. If you want to change to a DHCP address, wait until after the device has activated with the service, and then change it. You can then view the IPs of all the adapters in the **Device properties** in the Azure portal for your service.
+
++
+### Reconfigure Port 1 on first node
+
+Based on the network topology you choose, you'll need to route Port 1 to the internet via a switch and assign it IPs.
+
+Follow these steps to reconfigure Port 1:
+
+1. Disconnect Port 1 from the laptop by removing the connecting cable.
+1. Connect to the local web UI via the IP address of Port 2 at the following URL:
+
+ `https://<IP address of Port 2>`
+
+1. Sign in to the local web UI by providing the device password.
+1. Connect Port 1 via an appropriate cable. Use one of the following options corresponding to the supported network topologies.
+
+ - **Switchless**
+
+ ![Cabling diagram with Port 1 reconfigured.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/switchless-final-1.png)
++
+ - **Using external switches**
+
+ ![Cabling diagram for Port 1 reconfigured.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/switches-final-1.png)
+
+
+1. Go to the **Network** page for the first node.
+1. Configure IPs for Port 1, depending on the network topology that you wish to deploy:
+ 1. Assign Port 1 IPs that are in a different subnet from that of Port 2.
+ 1. Assign Port 1 IPs that are in the same subnet as that of Port 2.
+
+1. After Port 1 is configured, select **Next: Advanced networking >** to configure your network topology.
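
If you want to verify which of the two cases above your Port 1 and Port 2 addresses fall into, a quick Python check with the standard `ipaddress` module can tell you whether two IPs share a subnet. The sample addresses and the `same_subnet` helper are illustrative assumptions, not values from your device.

```python
import ipaddress

def same_subnet(ip_a, ip_b, prefix_len):
    """Return True when both IPv4 addresses fall in the same subnet
    of the given prefix length."""
    net_a = ipaddress.ip_interface(f"{ip_a}/{prefix_len}").network
    net_b = ipaddress.ip_interface(f"{ip_b}/{prefix_len}").network
    return net_a == net_b

# Hypothetical port IPs with a /24 subnet mask:
print(same_subnet("192.168.10.21", "192.168.10.22", 24))  # same subnet
print(same_subnet("192.168.10.21", "192.168.20.21", 24))  # different subnets
```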
+
+### Configure network topology on first node
+
+1. In the **Advanced networking** page, choose the topology for the cluster and storage traffic between nodes from the following options:
+
+ - **Use external switches, Port 1 and Port 2 in the same subnet**
+ - **Use external switches, Port 1 and Port 2 in different subnet**
+ - **Switchless, Port 1 and Port 2 in the same subnet**
+ - **Switchless, Port 1 and Port 2 in different subnet**
+
+ ![Local web UI "Advanced networking" page with "Use external switches" option selected](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-network-topology-1.png)
+
+1. Make sure that your node is cabled as per the selected topology.
+1. Select **Apply**.
+1. You'll see a **Confirm network setting** dialog. This dialog reminds you to make sure that your node is cabled as per the network topology you selected. Once you choose the network cluster topology, you can't change this topology without a device reset. Select **Yes** to confirm the network topology.
+
+ ![Local web UI "Confirm network setting" dialog](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/confirm-network-setting-1.png)
+
+ The network topology setting takes a few minutes to apply and you see a notification when the settings are successfully applied.
+
+1. Once the network topology is applied, the **Network** page updates. For example, if you selected network topology that uses external switches and separate virtual switches, you'll see that on the device node, a virtual switch **vSwitch1** is created at Port 1 and another virtual switch, **vSwitch2** is created on Port 2. Port 3 and Port 4 don't have any virtual switches.
+
+ ![Local web UI "Network" page updated](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/network-settings-updated-1.png)
+
+You'll now configure the network and the network topology of the second node.
+
+### Configure network on second node
+
+You'll now prepare the second node for clustering. You'll first need to configure the network. Follow these steps in the local UI of the second node:
+
+1. On the **Prepare a node for clustering** page, in the **Network** tile, select **Needs setup**.
+
+ ![Local web UI "Network" tile on second node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-network-2.png)
+
+1. Configure the network on the second node in a similar way that you configured the first node.
+
+### Reconfigure Port 1 on second node
+
+Follow these steps to reconfigure Port 1 on the second node as you did on the first node:
+
+1. Disconnect the cable on Port 1. Sign in to the local web UI using Port 2 IP address.
+1. Connect Port 1 via an appropriate cable and a switch on the second node.
+1. Assign IPs to Port 1 on the second node in the same way as you did on the first node.
+1. After Port 1 on the second node is configured, select **Next: Advanced networking >**.
+
+### Configure network topology on second node
+
+1. Make sure that the second node is cabled as per the topology you selected for the first node. In the **Advanced networking** page, choose and **Apply** the same topology that you selected for the first node.
+
+ ![Local web UI "Advanced networking" page with "Use external switches and Port 1 and Port 2 not teamed" option selected on second node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-network-topology-3.png)
+
+1. Select **Back to get started**.
++
+## Get authentication token
+
+You'll now get the authentication token that will be needed when adding this node to form a cluster. Follow these steps in the local UI of the second node:
+
+1. On the **Prepare a node for clustering** page, in the **Get authentication token** tile, select **Prepare node**.
+
+ ![Local web UI "Get authentication token" tile with "Prepare node" option selected on second node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/select-get-authentication-token-1.png)
+
+1. Select **Get token**.
+1. Copy the node serial number and the authentication token. You'll use this information when you add this node to the cluster on the first node.
+
+ ![Local web UI "Get authentication token" on second node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/get-authentication-token-1.png)
++
+## Configure cluster
+
+To configure the cluster, you'll need to establish a cluster witness and then add a prepared node. You'll also need to configure virtual IP settings so that you can connect to a cluster as opposed to a specific node.
++
+### Configure cluster witness
+
+You'll now create a cluster witness. A cluster witness helps establish quorum for a two-node device if a node goes down. To learn about quorum, see [Understanding quorum](/windows-server/failover-clustering/manage-cluster-quorum#understanding-quorum).
+
+A cluster witness can be:
+
+- **Cloud witness** if you use an Azure Storage account to provide a vote on cluster quorum. A cloud witness uses Azure Blob Storage to read or write a blob file and then uses it to arbitrate in split-brain resolution.
+
+ Use cloud witness when you have internet access. For more information on cloud witness, see [Deploy a cloud witness for Failover cluster](/windows-server/failover-clustering/deploy-cloud-witness).
+
+- **File share witness** if you use a local SMB file share to provide a vote in the cluster quorum. Use a file share witness if all the servers in a cluster have spotty internet connectivity or can't use disk witness as there aren't any shared drives.
+
+ Use file share witness if you're in an IT environment with other machines and file shares. For more information on file share witness, see [Deploy a file share witness for Failover cluster](/windows-server/failover-clustering/file-share-witness).
+
+Before you create a cluster witness, make sure that you've reviewed the cluster witness requirements.
+
+Follow these steps to configure the cluster witness.
+
+#### Configure cloud witness
+
+1. In the local UI of the first node, go to the **Cluster (Preview)** page. Under **Cluster witness type**, select **Modify**.
+
+ ![Local web UI "Cluster" page with "Modify" option selected for "Cluster witness" on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-1.png)
+
+1. In the **Modify cluster witness** blade, enter the following inputs.
+ 1. Choose the **Witness type** as **Cloud.**
+ 1. Enter the **Azure Storage account name**.
+ 1. Specify the Storage account authentication mechanism: **Access key** or **SAS token**.
+ 1. If you chose Access key as the authentication mechanism, enter the Access key of the Storage account, the Azure Storage container where the witness lives, and the service endpoint.
+ 1. Select **Apply**.
+
+ ![Local web UI "Cluster" page with cloud witness type selected in "Modify cluster witness" blade on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-cloud-1.png)
+
+#### Configure local witness
+
+1. In the local UI of the first node, go to the **Cluster** page. Under **Cluster witness type**, select **Modify**.
+
+ ![Local web UI "Cluster" page with "Modify" option selected for "Cluster witness" on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-1.png)
+
+1. In the **Modify cluster witness** blade, enter the following inputs.
+ 1. Choose the **Witness type** as **Local.**
+ 1. Enter the file share path in the *//server/fileshare* format.
+ 1. Select **Apply**.
+
+ ![Local web UI "Cluster" page with local witness type selected in "Modify cluster witness" blade on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-cluster-witness-local-1.png)
++
+### Add prepared node to cluster
+
+You'll now add the prepared node to the first node and form the cluster. Before you add the prepared node, make sure the networking on the incoming node is configured in the same way as on the node where you initiated cluster creation.
+
+1. In the local UI of the first node, go to the **Cluster** page. Under **Existing nodes**, select **Add node**.
+
+ ![Local web UI "Cluster" page with "Add node" option selected for "Existing" on first node](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-node-1.png)
++
+1. In the **Add node** blade, input the following information for the incoming node:
+
+ 1. Provide the serial number for the incoming node.
+ 1. Enter the authentication token for the incoming node.
+
+1. Select **Validate & add**. This step takes a few minutes.
+
+ ![Local web UI "Add node" page with "Add node" option selected for "Existing" on first node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-node-2.png)
+
+ You see a notification when the node is successfully validated.
+
+1. The node is now ready to join the cluster. Select **Apply**.
+
+ ![Local web UI "Add node" page with "Apply" option selected for second node.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/add-node-3.png)
+
+1. A dialog pops up, indicating that cluster creation could take several minutes. Select **OK** to continue. Once the cluster is created, the page updates to show that both nodes are added.
++
+## Configure virtual IPs
+
+For Azure consistent services and NFS, you'll also need to define a virtual IP that allows you to connect to a clustered device as opposed to a specific node. A virtual IP is an available IP in the cluster network and any client connecting to the cluster network on the two-node device should be able to access this IP.
++
+### For Azure Consistent Services
+
+For Azure Consistent Services, follow these steps to configure virtual IP.
+
+1. In the local UI on the **Cluster** page, under the **Virtual IP settings** section, select **Azure Consistent Services**.
+
+ <!--![Local web UI "Cluster" page with "Azure Consistent Services" selected for "Virtual IP Settings" on first node](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-azure-consistent-services-1m.png)-->
+
+1. In the **Virtual IP settings** blade, input the following.
+
+ 1. From the dropdown list, select the **Azure Consistent Services network**.
+ 1. Choose IP settings from **DHCP** or **static**.
+ 1. If you chose IP settings as static, enter a virtual IP. This should be a free IP from within the Azure Consistent Services network that you specified. If you selected DHCP, a virtual IP is automatically picked from the Azure Consistent Services network that you selected.
+1. Select **Apply**.
+
+ ![Local web UI "Cluster" page with "Virtual IP Settings" blade configured for Azure consistent services on first node](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-azure-consistent-services-2m.png)
++
+### For Network File System
+
+For clients connecting via NFS protocol to the two-node device, follow these steps to configure virtual IP.
+
+1. In the local UI on the **Cluster** page, under the **Virtual IP settings** section, select **Network File System**.
+
+ <!--![Local web UI "Cluster" page with "Network File System" selected for "Virtual IP Settings" on first node](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-file-system-1m.png)-->
+
+1. In the **Virtual IP settings** blade, input the following.
+
+ 1. From the dropdown list, select the **NFS network**.
+ 1. Choose IP settings from **DHCP** or **Static**.
+ 1. If you chose IP settings as static, enter a virtual IP. This should be a free IP from within the NFS network that you specified. If you selected DHCP, a virtual IP is automatically picked from the NFS network that you selected.
+1. Select **Apply**.
+
+ ![Local web UI "Cluster" page with "Virtual IP Settings" blade configured for NFS on first node](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-network-file-system-2m.png)
+
+> [!NOTE]
+> Virtual IP settings are required. If you do not configure this IP, you will be blocked when configuring the **Device settings** in the next step.
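
When you assign a static virtual IP, it must be a free address inside the selected network. A minimal Python sketch of that membership check follows; the network and addresses are hypothetical examples, and checking that the address is actually *free* (unused) on your network is outside this sketch.

```python
import ipaddress

def vip_in_network(vip, network):
    """Return True when the virtual IP is an assignable address inside
    the given IPv4 network (excludes network and broadcast addresses)."""
    net = ipaddress.ip_network(network)
    ip = ipaddress.ip_address(vip)
    return (ip in net
            and ip != net.network_address
            and ip != net.broadcast_address)

# Hypothetical cluster network and candidate virtual IP:
print(vip_in_network("10.126.72.50", "10.126.72.0/24"))
```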
+
+### Configure virtual switches and compute IPs
+
+After the cluster is formed and configured, you'll now create new virtual switches or assign intent to the existing virtual switches that are created based on the selected network topology.
+
+> [!IMPORTANT]
+> On a two-node cluster, compute should be configured on only one virtual switch.
+
+1. In the local UI, go to **Advanced networking** page.
+1. In the **Virtual switch** section, you'll assign compute intent to a virtual switch. You can select an existing virtual switch or select **Add virtual switch** to create a new switch.
+
+ ![Configure compute page in Advanced networking in local UI 1](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/configure-compute-network-1.png)
+
+1. In the **Network settings** blade, if using a new switch, provide the following:
+
+ 1. Provide a name for your virtual switch.
+ 1. Choose the network interface on which the virtual switch should be created.
+ 1. If deploying 5G workloads, set **Supports accelerated networking** to **Yes**.
+    1. Set the intent to associate with this network interface to **compute**. Alternatively, the switch can also be used for management traffic. You can't configure a storage intent because storage traffic was already configured based on the network topology that you selected earlier.
+
+ > [!TIP]
+ > Use *CTRL + Click* to select more than one intent for your virtual switch.
+
+1. Assign **Kubernetes node IPs**. These static IP addresses are for the Kubernetes VMs.
+
+    For an *n*-node device, provide a contiguous range of at least *n+1* IPv4 addresses for the compute VMs, using the start and end IP addresses. For a one-node device, provide a minimum of 2 contiguous IPv4 addresses. For a two-node cluster, provide a minimum of 3 contiguous IPv4 addresses.
+
+ > [!IMPORTANT]
+    > - Kubernetes on Azure Stack Edge uses the 172.27.0.0/16 subnet for pods and the 172.28.0.0/16 subnet for services. Make sure that these subnets aren't already in use in your network. If they are, you can change them by running the `Set-HcsKubeClusterNetworkInfo` cmdlet from the PowerShell interface of the device. For more information, see [Change Kubernetes pod and service subnets](azure-stack-edge-gpu-connect-powershell-interface.md#change-kubernetes-pod-and-service-subnets).
+ > - DHCP mode is not supported for Kubernetes node IPs. If you plan to deploy IoT Edge/Kubernetes, you must assign static Kubernetes IPs and then enable IoT role. This will ensure that static IPs are assigned to Kubernetes node VMs.
+
+1. Assign **Kubernetes external service IPs**. These are the load-balancing IP addresses for services that you want to expose outside the Kubernetes cluster. Specify a static range of contiguous IP addresses based on the number of services you'll expose.
+
+ > [!IMPORTANT]
+    > We strongly recommend that you specify a minimum of 1 IP address for the Azure Stack Edge Hub service to access compute modules. You can then optionally specify additional IP addresses for other services or IoT Edge modules (one per service or module) that need to be accessed from outside the cluster. You can update the service IP addresses later.
+
+1. Select **Apply**.
+
+ ![Configure compute page in Advanced networking in local UI 2](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-2.png)
+
+1. The configuration takes a couple of minutes to apply, and you may need to refresh the browser. You can verify that the specified virtual switch is created and enabled for compute.
+
+ ![Configure compute page in Advanced networking in local UI 3](./media/azure-stack-edge-gpu-deploy-configure-network-compute-web-proxy/configure-compute-network-3.png)
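The Kubernetes IP requirements above (at least *n+1* contiguous node IPs for an *n*-node device, and no overlap with the reserved pod and service subnets) can be sanity-checked before you fill in the form. The following is an illustrative sketch using Python's standard `ipaddress` module; `check_node_ip_range` is a hypothetical helper, not part of the device software:

```python
import ipaddress

# Subnets reserved by Kubernetes on Azure Stack Edge (from the note above).
RESERVED = [ipaddress.ip_network("172.27.0.0/16"),   # pods
            ipaddress.ip_network("172.28.0.0/16")]   # services

def check_node_ip_range(start, end, node_count):
    """Return a list of problems with a proposed Kubernetes node IP range."""
    first, last = ipaddress.ip_address(start), ipaddress.ip_address(end)
    problems = []
    size = int(last) - int(first) + 1
    # An n-node device needs at least n+1 contiguous addresses.
    if size < node_count + 1:
        problems.append(f"range has {size} addresses; need at least {node_count + 1}")
    for net in RESERVED:
        if first in net or last in net:
            problems.append(f"range overlaps reserved subnet {net}")
    return problems

# A two-node cluster needs at least 3 contiguous IPv4 addresses.
print(check_node_ip_range("10.0.0.10", "10.0.0.12", 2))   # → []
print(check_node_ip_range("172.27.0.5", "172.27.0.7", 2)) # overlaps pod subnet
```

The example addresses are placeholders; substitute a free range from your own compute network.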
++
+To delete a virtual switch, under the **Virtual switch** section, select **Delete virtual switch**. When a virtual switch is deleted, the associated virtual networks will also be deleted.
+
+> [!IMPORTANT]
+> Only one virtual switch can be assigned for compute.
+
+### Configure virtual network
+
+You can add or delete virtual networks associated with your virtual switches. To add a virtual network, follow these steps:
+
+1. In the local UI on the **Advanced networking** page, under the **Virtual network** section, select **Add virtual network**.
+1. In the **Add virtual network** blade, input the following information:
+
+ 1. Select a virtual switch for which you want to create a virtual network.
+ 1. Provide a **Name** for your virtual network.
+    1. Enter a **VLAN ID** as a unique number in the 1-4094 range.
+    1. Specify the **Subnet mask** and **Gateway** for your virtual network, per your physical network configuration.
+ 1. Select **Apply**.
++
+To delete a virtual network, under the **Virtual network** section, select **Delete virtual network**.
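As a quick sanity check before entering these values, the constraints above (a VLAN ID in the 1-4094 range, plus a valid subnet mask and gateway) can be sketched as follows. This is illustrative only; `validate_virtual_network` is a hypothetical helper, and the device's actual validation logic isn't documented here:

```python
import ipaddress

def validate_virtual_network(vlan_id, subnet_mask, gateway):
    """Illustrative sanity checks for a virtual network entry."""
    if not 1 <= vlan_id <= 4094:              # valid 802.1Q VLAN ID range
        return "VLAN ID must be in the 1-4094 range"
    try:
        ipaddress.ip_address(gateway)
    except ValueError:
        return "gateway must be a valid IP address"
    try:
        # A netmask is valid if it forms a legal network prefix.
        ipaddress.ip_network(f"0.0.0.0/{subnet_mask}")
    except ValueError:
        return "not a valid subnet mask"
    return "ok"

print(validate_virtual_network(100, "255.255.255.0", "192.168.10.1"))   # → ok
print(validate_virtual_network(5000, "255.255.255.0", "192.168.10.1"))  # VLAN ID rejected
```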
+
+
+## Configure web proxy
+
+This is an optional configuration. However, if you use a web proxy, you can configure it only on this page.
+
+> [!IMPORTANT]
+> * Proxy-auto config (PAC) files are not supported. A PAC file defines how web browsers and other user agents can automatically choose the appropriate proxy server (access method) for fetching a given URL.
+> * Transparent proxies work well with Azure Stack Edge Pro 2. For non-transparent proxies that intercept and read all the traffic (via their own certificates installed on the proxy server), upload the public key of the proxy's certificate as the signing chain on your Azure Stack Edge Pro device. You can then configure the proxy server settings on your Azure Stack Edge device. For more information, see [Bring your own certificates and upload through the local UI](azure-stack-edge-gpu-deploy-configure-certificates.md#bring-your-own-certificates).
+
+1. On the **Web proxy settings** page, take the following steps:
+
+    1. In the **Web proxy URL** box, enter the URL in this format: `http://<host IP address or FQDN>:<port number>`. HTTPS URLs aren't supported.
+
+ 2. To validate and apply the configured web proxy settings, select **Apply**.
+
+ ![Screenshot of the Web proxy page in the local web UI of an Azure Stack Edge device. The Apply button is highlighted.](./media/azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy/web-proxy-1.png)
+
+2. After the settings are applied, select **Next: Device**.
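The URL format above (HTTP scheme, a host, and a port) can be checked programmatically before you paste it into the UI. This is a hedged sketch: `check_proxy_url` is a hypothetical helper and `proxy.contoso.com` a placeholder hostname, not values from the device software:

```python
from urllib.parse import urlsplit

def check_proxy_url(url):
    """Check a web proxy URL against the http://<host>:<port> format."""
    parts = urlsplit(url)
    if parts.scheme != "http":
        return "only HTTP proxy URLs are supported (HTTPS is not)"
    if not parts.hostname:
        return "missing host IP address or FQDN"
    if parts.port is None:
        return "missing port number"
    return "ok"

print(check_proxy_url("http://proxy.contoso.com:3128"))   # → ok
print(check_proxy_url("https://proxy.contoso.com:3128"))  # rejected: HTTPS
print(check_proxy_url("http://proxy.contoso.com"))        # rejected: no port
```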
++
+## Next steps
+
+In this tutorial, you learned about:
+
+> [!div class="checklist"]
+> * Prerequisites
+> * Configure network
+> * Configure advanced networking
+> * Configure web proxy
++
+To learn how to set up your Azure Stack Edge Pro 2 device, see:
+
+> [!div class="nextstepaction"]
+> [Configure device settings](./azure-stack-edge-pro-2-deploy-set-up-device-update-time.md)
databox-online Azure Stack Edge Pro 2 Deploy Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-connect.md
+
+ Title: Tutorial to connect to, configure, activate Azure Stack Edge Pro 2 device
+description: Learn how you can connect to your Azure Stack Edge Pro 2 device by using the local web UI.
++++++ Last updated : 02/25/2022+
+zone_pivot_groups: azure-stack-edge-device-deployment
+# Customer intent: As an IT admin, I need to understand how to connect and activate Azure Stack Edge Pro so I can use it to transfer data to Azure.
+
+# Tutorial: Connect to Azure Stack Edge Pro 2
++
+This tutorial describes how you can connect to your Azure Stack Edge Pro 2 device by using the local web UI.
+
+The connection process can take around 5 minutes to complete.
+++
+This tutorial describes how you can connect to the local web UI on your two-node Azure Stack Edge Pro 2 device.
+
+The connection process can take around 10 minutes to complete.
++
+In this tutorial, you learn about:
+
+> [!div class="checklist"]
+>
+> * Prerequisites
+> * Connect to a physical device
++
+## Prerequisites
+
+Before you configure and set up your device, make sure that:
+
+* You've installed the physical device as detailed in [Install Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-install.md).
++
+## Connect to the local web UI setup
++
+1. Configure the Ethernet adapter on your computer to connect to your device with a static IP address of 192.168.100.5 and subnet 255.255.255.0.
+
+2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use a crossover cable or a USB Ethernet adapter. Use the following illustration to identify PORT 1 on your device.
+
+ ![Back plane of a cabled device](./media/azure-stack-edge-pro-2-deploy-install/cabled-backplane-1.png)
+
+    The back plane of the device may look slightly different depending on the exact model you've received. For more information, see [Cable your device](azure-stack-edge-gpu-deploy-install.md#cable-the-device).
++
+3. Open a browser window and access the local web UI of the device at `https://192.168.100.10`.
+ This action may take a few minutes after you've turned on the device.
+
+    You see an error or a warning indicating that there's a problem with the website's security certificate.
+
+ ![Website security certificate error message](./media/azure-stack-edge-deploy-connect-setup-activate/image2.png)
+
+4. Select **Continue to this webpage**.
+ These steps might vary depending on the browser you're using.
+
+5. Sign in to the web UI of your device. The default password is *Password1*.
+
+ ![Azure Stack Edge device local web UI sign-in page](./media/azure-stack-edge-deploy-connect-setup-activate/image3.png)
+
+6. At the prompt, change the device administrator password.
+    The new password must contain between 8 and 16 characters. It must contain at least three of the following character types: uppercase, lowercase, numeric, and special characters.
+
+You're now at the **Overview** page of your device. The next step is to configure the network settings for your device.
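The password policy just described (8-16 characters, at least three of the four character types) can be expressed as a short check. This sketch is for illustration only; `meets_password_policy` is a hypothetical helper, and the device may apply additional rules not listed here:

```python
import re

def meets_password_policy(pw):
    """Check a candidate password against the stated device policy:
    8-16 characters, at least three of the four character types."""
    if not 8 <= len(pw) <= 16:
        return False
    categories = [
        re.search(r"[A-Z]", pw),         # uppercase
        re.search(r"[a-z]", pw),         # lowercase
        re.search(r"[0-9]", pw),         # numeric
        re.search(r"[^A-Za-z0-9]", pw),  # special
    ]
    return sum(1 for c in categories if c) >= 3

print(meets_password_policy("Passw0rd!"))  # → True (four categories, 9 chars)
print(meets_password_policy("password"))   # → False (only one category)
```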
+++
+1. Configure the Ethernet adapter on your computer to connect to your device with a static IP address of 192.168.100.5 and subnet 255.255.255.0.
+
+2. Connect the computer to PORT 1 on your device. If connecting the computer to the device directly (without a switch), use a crossover cable or a USB Ethernet adapter.
+
+3. Open a browser window and access the local web UI of the device at `https://192.168.100.10`.
+ This action may take a few minutes after you've turned on the device.
+
+    You see an error or a warning indicating that there's a problem with the website's security certificate.
+
+ ![Website security certificate error message](./media/azure-stack-edge-deploy-connect-setup-activate/image2.png)
+
+4. Select **Continue to this webpage**.
+ These steps might vary depending on the browser you're using.
+
+5. Sign in to the web UI of your device. The default password is *Password1*.
+
+ ![Azure Stack Edge device local web UI sign-in page](./media/azure-stack-edge-deploy-connect-setup-activate/image3.png)
+
+6. At the prompt, change the device administrator password.
+    The new password must contain between 8 and 16 characters. It must contain at least three of the following character types: uppercase, lowercase, numeric, and special characters. You're now at the **Overview** page of your two-node device.
+
+7. Repeat the above steps to connect to the second node of your two-node device.
+
+The next step is to configure the network settings for your device.
++
+## Next steps
+
+In this tutorial, you learned about:
+
+> [!div class="checklist"]
+> * Prerequisites
+> * Connect to a physical device
++
+To learn how to configure network settings on your Azure Stack Edge Pro 2 device, see:
+
+> [!div class="nextstepaction"]
+> [Configure network](./azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md)
databox-online Azure Stack Edge Pro 2 Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-install.md
+
+ Title: Tutorial to install - Unpack, rack, cable Azure Stack Edge Pro 2 physical device | Microsoft Docs
+description: The second tutorial about installing Azure Stack Edge Pro 2 involves how to unpack, rack, and cable the physical device.
++++++ Last updated : 02/28/2022+
+zone_pivot_groups: azure-stack-edge-device-deployment
+# Customer intent: As an IT admin, I need to understand how to install Azure Stack Edge Pro 2 in datacenter so I can use it to transfer data to Azure.
+
+# Tutorial: Install Azure Stack Edge Pro 2
+++
+This tutorial describes how to install an Azure Stack Edge Pro 2 physical device. The installation procedure involves unpacking, rack mounting, and cabling the device.
+
+The installation can take around two hours to complete.
+++
+This tutorial describes how to install a two-node Azure Stack Edge Pro 2 device cluster. The installation procedure involves unpacking, rack mounting, and cabling the device.
+
+The installation can take around 2.5 to 3 hours to complete.
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Unpack the device
+> * Rack mount the device
+> * Cable the device
+
+## Prerequisites
+
+The prerequisites for installing a physical device are as follows:
+
+### For the Azure Stack Edge resource
+
+Before you begin, make sure that:
+
+* You've completed all the steps in [Prepare to deploy Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-prep.md).
+ * You've created an Azure Stack Edge resource to deploy your device.
+ * You've generated the activation key to activate your device with the Azure Stack Edge resource.
+
+
+### For the Azure Stack Edge Pro 2 physical device
+
+Before you deploy a device:
+
+- Make sure that the device rests safely on a flat, stable, and level work surface.
+- Verify that the site where you intend to set up has:
+ - Standard AC power from an independent source.
+
+ -OR-
+ - A power distribution unit (PDU) with an uninterruptible power supply (UPS).
+ - An available 2U slot on the rack on which you intend to mount the device. If you wish to wall mount your device, you should have a space identified on the wall or a desk where you intend to mount the device.
+
+### For the network in the datacenter
+
+Before you begin:
+
+- Review the networking requirements for deploying Azure Stack Edge Pro 2, and configure the datacenter network per the requirements. For more information, see [Azure Stack Edge Pro 2 networking requirements](azure-stack-edge-system-requirements.md#networking-port-requirements).
+
+- Make sure that the minimum Internet bandwidth is 20 Mbps for optimal functioning of the device.
++
+## Unpack the device
++
+This device is shipped in a single box. Complete the following steps to unpack your device.
+
+1. Place the box on a flat, level surface.
+2. Inspect the box and the packaging foam for crushes, cuts, water damage, or any other obvious damage. If the box or packaging is severely damaged, don't open it. Contact Microsoft Support to help you assess whether the device is in good working order.
+3. Unpack the box. After unpacking the box, make sure that you have:
+ - One single enclosure Azure Stack Edge Pro 2 device.
+ - One power cord.
+ - One packaged bezel.
+ - One packaged mounting accessory which could be:
+ - A 4-post rack slide rail, or
+ - A 2-post rack slide, or
+ - A wall mount (may be packaged separately).
+ - A safety, environmental, and regulatory information booklet.
+++
+This device is shipped in two boxes. Complete the following steps to unpack your device.
+
+1. Place the box on a flat, level surface.
+2. Inspect the box and the packaging foam for crushes, cuts, water damage, or any other obvious damage. If the box or packaging is severely damaged, don't open it. Contact Microsoft Support to help you assess whether the device is in good working order.
+3. Unpack the box. After unpacking the box, make sure that you have the following in each box:
+ - One single enclosure Azure Stack Edge Pro 2 device
+ - One power cord
+ - One packaged bezel
+ - A pair of packaged Wi-Fi antennas in the accessory box
+ - One packaged mounting accessory which could be:
+ - A 4-post rack slide rail, or
+ - A 2-post rack slide, or
+ - A wall mount (may be packaged separately).
+ - A safety, environmental, and regulatory information booklet
++
+
+If you didn't receive all of the items listed here, [Contact Microsoft Support](azure-stack-edge-contact-microsoft-support.md). The next step is to mount your device on a rack or wall.
+
+## Rack mount the device
+
+The device can be mounted using one of the following mounting accessories:
+
+- A 4-post rackmount.
+- A 2-post rackmount.
+- A wallmount.
+
+If you received a 4-post rackmount, use the following procedure to rack mount your device. For other mounting accessories, see [Racking using a 2-post rackmount](azure-stack-edge-pro-2-two-post-rack-mounting.md) or [Mounting the device on the wall](azure-stack-edge-pro-2-wall-mount.md).
+
+If you decide not to mount your device, you can also place it on a desk or a shelf.
+++
+### Prerequisites
+
+- Before you begin, make sure to read the [Safety instructions](azure-stack-edge-pro-2-safety.md) for your device.
+- Begin installing the rails in the allotted space that is closest to the bottom of the rack enclosure.
+- For the rail mounting configuration:
+ - You need to use 10L M5 screws. Make sure that these are included in your rail kit.
+ - You need a Phillips head screwdriver.
+
+### Identify the rail kit contents
+
+Locate the components for installing the rail kit assembly:
+- Inner rails
+- Chassis of your device
+- 10L M5 screws
+
+### Install rails
+
+1. Remove the inner rail.
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-deploy-install/4-post-remove-inner-rail.png" alt-text="Diagram showing how to remove inner rail.":::
+
+1. Push and slide the middle rail back.
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-deploy-install/4-post-push-middle-rail.png" alt-text="Diagram showing how to push and slide the middle rail.":::
+
+1. Install the inner rail onto the chassis. **Make sure to fasten the inner rail screw.**
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-deploy-install/4-post-install-inner-rail-onto-chassis.png" alt-text="Diagram showing how to install inner rail onto the device chassis using a 4-post rackmount accessory.":::
+
+3. Fix the outer rail and the bracket assembly to the frame. Ensure the latch is fully engaged with the rack post.
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-deploy-install/4-post-detach-bracket-1.png" alt-text="Diagram showing how to fix the outer rail.":::
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-deploy-install/4-post-front-rear-bracket.png" alt-text="Diagram showing the front and rear bracket.":::
++
+4. Insert the chassis to complete the installation.
+
+ 1. Pull the middle rail so that it is fully extended in lock position. Ensure the ball bearing retainer is located at the front of the middle rail (reference diagrams 1 and 2).
+ 1. Insert the chassis into the middle rail (reference diagram 3).
+ 1. Once you hit a stop, pull and push the blue release tab on the inner rails (reference diagram 4).
+ 1. Tighten the M5 screws of the chassis to the rail once the server is seated (reference diagram 5).
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-deploy-install/4-post-insert-chassis-new.png" alt-text="Diagram showing how to insert the chassis.":::
+
+### Install the bezel
+
+After the device is mounted on a rack, install the bezel on the device. The bezel serves as the protective face plate for the device.
+
+1. Locate two fixed pins on the right side of the bezel, and two spring-loaded pins on the left side of the bezel.
+2. Insert the bezel at an angle, with the fixed pins going into the holes in the right rack ear.
+3. Push the `[>`-shaped latch to the right, move the left side of the bezel into place, then release the latch so that the spring pins engage with the holes in the left rack ear.
+
+ ![Mount the bezel](./media/azure-stack-edge-pro-2-deploy-install/mount-bezel.png)
+
+4. Lock the bezel in place using the provided security key.
+
+ ![Lock the bezel](./media/azure-stack-edge-pro-2-deploy-install/lock-bezel.png)
++
+If deploying a two-node device cluster, make sure to mount both the devices on the rack or the wall.
++
+
+## Cable the device
+
+Route the cables and then cable your device. The following procedures explain how to cable your Azure Stack Edge Pro 2 device for power and network.
+++
+### Cabling checklist
++
+Before you start cabling your device, you need the following things:
+
+- Your Azure Stack Edge Pro 2 physical device, unpacked, and rack mounted.
+- One power cable (included in the device package).
+- At least one 1-GbE RJ-45 network cable to connect to Port 1. There are two 1-GbE network interfaces, one used for initial configuration and one for data, on the device. These network interfaces can also act as 10-GbE interfaces.
+- One 100-GbE QSFP28 passive direct attached cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured. At least one data network interface from among Port 2, Port 3, and Port 4 needs to be connected to the Internet (with connectivity to Azure). Here is an example QSFP28 DAC connector:
+
+ ![Example of a QSFP28 DAC connector](./media/azure-stack-edge-pro-2-deploy-install/qsfp28-dac-connector.png)
+
+ For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware+Compatible+Products).
+- Access to one power distribution unit.
+- At least one 100-GbE network switch to connect a 10/1-GbE or a 100-GbE network interface to the internet for data.
+- A pair of Wi-Fi antennas (included in the accessory box).
+++
+Before you start cabling your device, you need the following things:
+
+- Your two Azure Stack Edge Pro 2 physical devices, unpacked, and rack mounted.
+- One power cable for each device.
+- Access to one power distribution unit for each device.
+- At least two 1-GbE RJ-45 network cables per device to connect to Port 1 and Port 2. There are two 10/1-GbE network interfaces, one used for initial configuration and one for data, on each device.
+- A 100-GbE QSFP28 passive direct attached cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured on each device. The total number needed depends on the network topology that you'll deploy. Here is an example QSFP28 DAC connector:
+
+ ![Example of a QSFP28 DAC connector](./media/azure-stack-edge-pro-2-deploy-install/qsfp28-dac-connector.png)
+
+ For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware+Compatible+Products).
+- At least one 100-GbE network switch to connect a 1-GbE or a 100-GbE network interface to the internet for data for each device.
+
+
+
+> [!NOTE]
+> The Azure Stack Edge Pro 2 device should be connected to the datacenter network so that it can ingest data from data source servers.
++
+### Device front panel
+
+The front panel on Azure Stack Edge Pro 2 device:
+
+- The front panel has disk drives and a power button.
+
+    - There are six disk slots in the front of your device.
+    - Slots 0 to 3 contain data disks; slots 4 and 5 are empty.
+
+ ![Disks and power button on the front plane of a device](./media/azure-stack-edge-pro-2-deploy-install/front-plane-labeled-1.png)
+
+### Device back plane
+
+- The back plane of Azure Stack Edge Pro 2 device has:
+
+ ![Ports on the back plane of a device](./media/azure-stack-edge-pro-2-deploy-install/backplane-ports-1.png)
+
+ - Four network interfaces:
+
+ - Two 1-Gbps interfaces, Port 1 and Port 2, that can also serve as 10-Gbps interfaces.
+      - Two 100-Gbps interfaces, Port 3 and Port 4.
+
+ - A baseboard management controller (BMC).
+
+ - One network card corresponding to two high-speed ports and two built-in 10/1-GbE ports:
+
+ - **Intel Ethernet X722 network adapter** - Port 1, Port 2.
+ - **Mellanox dual port 100 GbE ConnectX-6 Dx network adapter** - Port 3, Port 4. See a full list of [Supported cables, switches, and transceivers for ConnectX-6 Dx network adapters](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware+Compatible+Products).
+
+    - Two Wi-Fi SubMiniature version A (SMA) connectors located on the faceplate of the PCIe card slot below Port 3 and Port 4. The Wi-Fi antennas are installed on these connectors.
+
+
+
+### Power cabling
++
+Follow these steps to cable your device for power:
+
+1. Identify the various ports on the back plane of your device.
+1. Locate the disk slots and the power button on the front of the device.
+1. Connect the power cord to the PSU in the enclosure.
+1. Attach the power cord to the power distribution unit (PDU).
+1. Press the power button to turn on the device.
+++
+Follow these steps to cable your device for power:
+
+1. Identify the various ports on the back plane of each of your devices.
+1. Locate the disk slots and the power button on the front of each device.
+1. Connect the power cord to the PSU in each device enclosure.
+1. Attach the power cords from the two devices to two different power distribution units (PDU).
+1. Press the power buttons on the front panels to turn on both the devices.
++
+### Wi-Fi antenna installation
+
+Follow these steps to install Wi-Fi antennas on your device:
+
+1. Locate the two Wi-Fi SMA RF threaded connectors on the back plane of the device. These gold-colored connectors are located on the faceplate of the PCIe card slot, right below Port 3 and Port 4.
+
+1. Use a clockwise motion to thread the antennas onto the SMA connectors. Secure them using only your fingers. Do not use a tool or wrench.
+
+ >[!NOTE]
+ > Tighten the connectors sufficiently so that the antenna's rotary joints can turn without causing the threaded connectors to become loose.
+
+1. To position the antennas as desired, articulate the hinge and turn the rotary joint.
++
+### Network cabling
++
+Follow these steps to cable your device for network:
+
+1. Connect the 10/1-GbE network interface Port 1 to the computer that's used to configure the physical device. Port 1 serves as the management interface for the initial configuration of the device.
+
+ > [!NOTE]
+ > If connecting the computer directly to your device (without going through a switch), use a crossover cable or a USB Ethernet adapter.
+
+1. Connect one or more of Port 2, Port 3, Port 4 to the datacenter network/internet.
+
+ - If connecting Port 2, use the 1-GbE RJ-45 network cable.
+ - For the 100-GbE network interfaces, use the QSFP28 passive direct attached cable (tested in-house).
+
+    The back plane of a cabled device looks as follows:
+
+ ![Back plane of a cabled device](./media/azure-stack-edge-pro-2-deploy-install/cabled-backplane-1.png)
+++
+The two-node device can be configured in the following different ways:
+
+- Without switches
+- Using external switches
+
+Each of these configurations is described in the following sections. For more information on when to use these configurations, see [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md).
+
+#### Switchless
+
+This configuration is used when high speed switches are not available.
+
+Cable your device as shown in the following diagram:
+
+![Diagram showing cabling scheme for Switchless network topology.](./media/azure-stack-edge-pro-2-deploy-install/switchless-initial-1.png)
+
+1. Connect Port 1 on each node to a computer using a crossover cable or a USB Ethernet adapter for the initial configuration of the device.
+1. Connect Port 2 on each node to a 1-GbE switch via a 1-GbE RJ-45 network cable. If available, a 10-GbE switch can also be used.
+1. Connect Port 3 on one device directly (without a switch) to the Port 3 on the other device node. Use a QSFP28 passive direct attached cable (tested in-house) for the connection.
+1. Connect Port 4 on one device directly (without a switch) to the Port 4 on the other device node. Use a QSFP28 passive direct attached cable (tested in-house) for the connection.
++
+#### Using external switches
+
+This configuration is used for Network Function Manager (NFM) workload deployments and requires 10-GbE high speed switches.
+
+Cable your device as shown in the following diagram:
+
+![Diagram showing cabling scheme when using network topology with external switches.](./media/azure-stack-edge-pro-2-deploy-install/external-switches-initial-1.png)
++
+1. Connect Port 1 on each node to a computer using a crossover cable or a USB Ethernet adapter for the initial configuration of the device.
+1. Connect Port 2 on each node to a 10-GbE high-speed switch via a 10-GbE RJ-45 network cable. A high speed switch must be used.
+1. Port 3 and Port 4 are reserved for NFM workload deployments and must be connected accordingly.
++++
+## Next steps
+
+In this tutorial, you learned how to:
+
+> [!div class="checklist"]
+> * Unpack the device
+> * Rack mount the device
+> * Cable the device
+
+Advance to the next tutorial to learn how to connect to your device.
+
+> [!div class="nextstepaction"]
+> [Connect Azure Stack Edge Pro 2](./azure-stack-edge-pro-2-deploy-connect.md)
++
databox-online Azure Stack Edge Pro 2 Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-prep.md
+
+ Title: Tutorial to prepare Azure portal, datacenter environment to deploy Azure Stack Edge Pro 2
+description: The first tutorial about deploying Azure Stack Edge Pro 2 involves preparing the Azure portal.
++++++ Last updated : 02/28/2022+
+# Customer intent: As an IT admin, I need to understand how to prepare the portal to deploy Azure Stack Edge Pro 2 so I can use it to transfer data to Azure.
+
+# Tutorial: Prepare to deploy Azure Stack Edge Pro 2
+
+This tutorial is the first in the series of deployment tutorials that are required to completely deploy Azure Stack Edge Pro 2. This tutorial describes how to prepare the Azure portal to deploy an Azure Stack Edge resource.
+
+You need administrator privileges to complete the setup and configuration process. The portal preparation takes less than 10 minutes.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a new resource
+> * Get the activation key
+
+### Get started
+
+For Azure Stack Edge Pro 2 deployment, you need to first prepare your environment. Once the environment is ready, follow the required steps and if needed, optional steps and procedures to fully deploy the device. The step-by-step deployment instructions indicate when you should perform each of these required and optional steps.
+
+| Step | Description |
+| | |
+| **Preparation** |These steps must be completed in preparation for the upcoming deployment. |
+| **[Deployment configuration checklist](#deployment-configuration-checklist)** |Use this checklist to gather and record information before and during the deployment. |
+| **[Deployment prerequisites](#prerequisites)** |These prerequisites validate that the environment is ready for deployment. |
+| | |
+|**Deployment tutorials** |These tutorials are required to deploy your Azure Stack Edge Pro 2 device in production. |
+|**[1. Prepare the Azure portal for Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-prep.md)** |Create and configure your Azure Stack Edge resource before you install an Azure Stack Edge physical device. |
+|**[2. Install Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-install.md)**|Unpack, rack, and cable the Azure Stack Edge Pro 2 physical device. |
+|**[3. Connect to Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-connect.md)** |Once the device is installed, connect to its local web UI. |
+|**[4. Configure network settings for Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md)** |Configure network including the compute network and web proxy settings for your device. |
+|**[5. Configure device settings for Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-set-up-device-update-time.md)** |Assign a device name and DNS domain, configure update server and device time. |
+|**[6. Configure security settings for Azure Stack Edge Pro 2](azure-stack-edge-pro-r-security.md)** |Configure certificates for your device. Use device-generated certificates or bring your own certificates. |
+|**[7. Activate Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-activate.md)** |Use the activation key from service to activate the device. The device is ready to set up SMB or NFS shares or connect via REST. |
+|**[8. Configure compute](azure-stack-edge-gpu-deploy-configure-compute.md)** |Configure the compute role on your device. A Kubernetes cluster is also created. |
+|**[9A. Transfer data with Edge shares](./azure-stack-edge-gpu-deploy-add-shares.md)** |Add shares and connect to shares via SMB or NFS. |
+|**[9B. Transfer data with Edge storage accounts](./azure-stack-edge-gpu-deploy-add-storage-accounts.md)** |Add storage accounts and connect to blob storage via REST APIs. |
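+Once the shares from steps 9A and 9B exist, clients connect to them over SMB or NFS. The following is a minimal client-side sketch (not part of the official tutorials), assuming a Linux client with cifs-utils installed; "myasedevice" and "myshare" are hypothetical placeholders for your device name (or IP) and share name:
+
```shell
# Helpers that build the mount source paths; all names below are hypothetical.
smb_source() { echo "//$1/$2"; }   # UNC-style source for mount -t cifs
nfs_source() { echo "$1:/$2"; }    # host:/share source for mount -t nfs

DEVICE=myasedevice   # hypothetical device name or IP
SHARE=myshare        # hypothetical share name

# Print the mount commands you would run (sudo required to actually mount):
echo "SMB: sudo mount -t cifs $(smb_source "$DEVICE" "$SHARE") /mnt/$SHARE -o username=<shareuser>,vers=3.0"
echo "NFS: sudo mount -t nfs $(nfs_source "$DEVICE" "$SHARE") /mnt/$SHARE"
```
+
+The vers=3.0 option is an assumption; check the SMB versions your device supports in the system requirements before mounting.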
+
+You can now begin to gather information regarding the software configuration for your Azure Stack Edge Pro 2 device.
+
+## Deployment configuration checklist
+
+Before you deploy your device, you need to collect information to configure the software on your Azure Stack Edge Pro 2 device. Preparing some of this information ahead of time helps streamline the process of deploying the device in your environment. Use the [Azure Stack Edge Pro 2 deployment configuration checklist](azure-stack-edge-pro-2-deploy-checklist.md) to note down the configuration details as you deploy your device.
+
+## Prerequisites
+
+Following are the configuration prerequisites for your Azure Stack Edge resource, your Azure Stack Edge Pro 2 device, and the datacenter network.
+
+### For the Azure Stack Edge resource
+
+### For the Azure Stack Edge Pro 2 device
+
+Before you begin, make sure that:
+
+- You've reviewed the safety information for this device at: [Safety guidelines for your Azure Stack Edge device](azure-stack-edge-pro-2-safety.md).
+- You have a 2U slot available in a standard 19" rack in your datacenter if you plan to mount the device on a rack.
+- You have access to a flat, stable, and level work surface where the device can rest safely.
+- The site where you intend to set up the device has standard AC power from an independent source or a rack power distribution unit (PDU) with an uninterruptible power supply (UPS).
+- You have access to a physical device.
+
+### For the datacenter network
+
+Before you begin, make sure that:
+
+- The network in your datacenter is configured per the networking requirements for your Azure Stack Edge device. For more information, see [Azure Stack Edge Pro 2 System Requirements](azure-stack-edge-gpu-system-requirements.md).
+
+- For normal operating conditions of your Azure Stack Edge, you have:
+
+ - A minimum of 10-Mbps download bandwidth to ensure the device stays updated.
+ - A minimum of 20-Mbps dedicated upload and download bandwidth to transfer files.
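+As a rough sanity check of these figures before deployment, you can time a download from the prospective network and convert curl's bytes-per-second output into Mbps. This is a sketch; the URL is a placeholder for any large, reliably hosted test file:
+
```shell
# to_mbps converts a bytes/sec figure (possibly with a decimal part,
# as curl reports it) into whole megabits per second.
to_mbps() {
  echo $(( ${1%.*} * 8 / 1000000 ))
}

# Placeholder URL -- substitute a large test file you trust.
speed=$(curl -s -o /dev/null --max-time 30 \
  -w '%{speed_download}' https://speed.example.com/100MB.bin) || speed=0

echo "Measured download: $(to_mbps "${speed:-0}") Mbps"
echo "Required: >= 10 Mbps for updates, >= 20 Mbps dedicated for transfers"
```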
+
+## Create a new resource
+
+If you have an existing Azure Stack Edge resource to manage your physical device, skip this step and go to [Get the activation key](#get-the-activation-key).
+
+### Create an order
+
+You can use the Azure Edge Hardware Center to explore and order a variety of hardware from the Azure hybrid portfolio including Azure Stack Edge Pro 2 devices.
+
+When you place an order through the Azure Edge Hardware Center, you can order multiple devices to be shipped to more than one address, and you can reuse ship-to addresses from other orders.
+
+Ordering through Azure Edge Hardware Center creates an Azure resource that contains all your order-related information. One resource is created for each unit ordered. You'll have to create an Azure Stack Edge resource after you receive the device, to activate and manage it.
+
+#### Create a management resource for each device
+
+## Get the activation key
+
+After the Azure Stack Edge resource is up and running, you'll need to get the activation key. This key is used to activate and connect your Azure Stack Edge Pro 2 device with the resource. You can get this key now while you are in the Azure portal.
+
+1. Select the resource you created, and select **Overview**.
+
+2. In the right pane, enter a name for the Azure Key Vault or accept the default name. The key vault name can be between 3 and 24 characters.
+
+ A key vault is created for each Azure Stack Edge resource that is activated with your device. The key vault lets you store and access secrets; for example, the Channel Integrity Key (CIK) for the service is stored in the key vault.
+
+ Once you've specified a key vault name, select **Generate key** to create an activation key.
+
+ ![Screenshot of the Overview pane for a newly created Azure Stack Edge resource. The Generate Activation Key button is highlighted.](media/azure-stack-edge-gpu-deploy-prep/azure-stack-edge-resource-3.png)
+
+ Wait a few minutes while the key vault and activation key are created. Select the copy icon to copy the key and save it for later use.
+
+> [!IMPORTANT]
+> - The activation key expires three days after it is generated.
+> - If the key has expired, generate a new key. The older key is not valid.
+
+## Next steps
+
+In this tutorial, you learned how to:
+
+> [!div class="checklist"]
+> * Create a new resource
+> * Get the activation key
+
+Advance to the next tutorial to learn how to install Azure Stack Edge Pro 2.
+
+> [!div class="nextstepaction"]
+> [Install Azure Stack Edge Pro 2](./azure-stack-edge-pro-2-deploy-install.md)
databox-online Azure Stack Edge Pro 2 Deploy Set Up Device Update Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-set-up-device-update-time.md
+
+ Title: Tutorial to connect, configure, activate Azure Stack Edge Pro 2 device
+description: Tutorial to deploy Azure Stack Edge Pro 2 instructs you to configure device settings including device name, update server, and time server via the local web UI
+ Last updated : 03/01/2022
+# Customer intent: As an IT admin, I need to understand how to set up device name, update server and time server via the local web UI of Azure Stack Edge Pro 2 so I can use the device to transfer data to Azure.
+
+# Tutorial: Configure the device settings for Azure Stack Edge Pro 2
+
+This tutorial describes how you configure device-related settings for your Azure Stack Edge Pro 2 device. You can set up your device name, update server, and time server via the local web UI.
+
+The device settings take around 5-7 minutes to configure.
+
+In this tutorial, you learn about:
+
+> [!div class="checklist"]
+>
+> * Prerequisites
+> * Configure device settings
+> * Configure update
+> * Configure time
+
+## Prerequisites
+
+Before you configure device-related settings on your Azure Stack Edge Pro 2, make sure that:
+
+* For your physical device:
+
+ - You've installed the physical device as detailed in [Install Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-install.md).
+ - You've configured the network, and enabled and configured the compute network, on your device as detailed in [Tutorial: Configure network for Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-configure-network-compute-web-proxy.md).
+
+## Configure device settings
+
+Follow these steps to configure device-related settings:
+
+1. On the **Device** page of the local web UI of your device, take the following steps:
+
+ 1. Enter a friendly name for your device. The friendly name must contain from 1 to 13 characters and can have letters, numbers, and hyphens.
+
+ 2. Provide a **DNS domain** for your device. This domain is used to set up the device as a file server.
+
+ 3. To validate and apply the configured device settings, select **Apply**.
+
+ ![Screenshot of the Device page in the local web UI of an Azure Stack Edge device. The Apply button is highlighted.](./media/azure-stack-edge-pro-2-deploy-set-up-device-update-time/device-1.png)
+
+ If you've changed the device name and the DNS domain, the automatically generated self-signed certificates on the device won't work. You'll see a warning to this effect.
+
+
+ ![Screenshot of the Warning in the Device page of local web UI of an Azure Stack Edge device. The OK button is highlighted.](./media/azure-stack-edge-pro-2-deploy-set-up-device-update-time/device-2.png)
+
+ 4. When the device name and the DNS domain are changed, the SMB endpoint is created.
+
+ 5. After the settings are applied, select **Next: Update server**.
+
+ ![Screenshot of the Device page in the local web UI of an Azure Stack Edge device. The SMB server and Next: Update server > is highlighted.](./media/azure-stack-edge-pro-2-deploy-set-up-device-update-time/device-3.png)
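+The naming rules above can be sketched as a quick local check. This hypothetical example validates a proposed device name (1 to 13 characters; letters, numbers, and hyphens) and shows the SMB endpoint FQDN that the device name and DNS domain combine into; "myase-01" and "contoso.com" are placeholders:
+
```shell
# valid_name enforces the stated rules: 1-13 characters drawn from
# letters, numbers, and hyphens.
valid_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9-]{1,13}$'
}

# fqdn joins the device name and DNS domain into the endpoint name.
fqdn() { echo "$1.$2"; }

NAME=myase-01       # hypothetical device name
DOMAIN=contoso.com  # hypothetical DNS domain

if valid_name "$NAME"; then
  echo "SMB endpoint: $(fqdn "$NAME" "$DOMAIN")"
else
  echo "Invalid device name: $NAME"
fi
```
+
+Once the settings are applied, confirm that your DNS resolves the resulting FQDN (for example, with nslookup) before pointing clients at the SMB endpoint.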
+
+## Configure update server
+
+1. On the **Update server** page of the local web UI of your device, you can now configure the location from where to download the updates for your device.
+
+ - You can get the updates directly from the **Microsoft Update server**.
+
+ ![Screenshot of the Update server page with Microsoft update server configured in the local web UI of an Azure Stack Edge device. The Apply button is highlighted.](./media/azure-stack-edge-pro-2-deploy-set-up-device-update-time/update-1.png)
+
+ You can also choose to deploy updates from **Windows Server Update Services** (WSUS). Provide the path to the WSUS server.
+
+ ![Screenshot of the Update server page with Windows Server Update Services configured in the local web UI of an Azure Stack Edge device. The Apply button is highlighted.](./media/azure-stack-edge-pro-2-deploy-set-up-device-update-time/update-2.png)
+
+ > [!NOTE]
+ > If a separate Windows Update server is configured and if you choose to connect over *https* (instead of *http*), then signing chain certificates required to connect to the update server are needed. For information on how to create and upload certificates, go to [Manage certificates](azure-stack-edge-gpu-manage-certificates.md).
+
+2. Select **Apply**.
+3. After the update server is configured, select **Next: Time**.
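+If you're connecting to a WSUS server over *https*, one way to see which signing chain certificates the server presents (and therefore which certificates you may need to upload) is an openssl probe. The host name below is hypothetical; 8531 is the conventional WSUS https port (8530 for http):
+
```shell
WSUS_HOST=wsus.contoso.com   # hypothetical WSUS server
WSUS_PORT=8531               # conventional WSUS https port (http uses 8530)

# Print the subject/issuer of each certificate in the presented chain.
openssl s_client -connect "$WSUS_HOST:$WSUS_PORT" -showcerts </dev/null \
  2>/dev/null | grep -E '(subject|issuer)=' \
  || echo "Could not reach $WSUS_HOST:$WSUS_PORT"
```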
+
+
+## Configure time
+
+Follow these steps to configure time settings on your device.
+
+> [!IMPORTANT]
+> Though the time settings are optional, we strongly recommend that you configure a primary NTP and a secondary NTP server on the local network for your device. If a local server is not available, you can configure a public NTP server.
+
+NTP servers are required because your device must synchronize time so that it can authenticate with your cloud service providers.
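+Before entering the servers, you can confirm they're reachable from your network. A minimal sketch, assuming a Linux client with the ntpdate package installed (w32tm /stripchart is a Windows equivalent):
+
```shell
# Query the NTP server without adjusting the local clock (-q).
NTP_PRIMARY=${NTP_PRIMARY:-time.windows.com}

if ntpdate -q "$NTP_PRIMARY" >/dev/null 2>&1; then
  echo "$NTP_PRIMARY is reachable over NTP (123/UDP)"
else
  echo "Cannot reach $NTP_PRIMARY -- check that outbound 123/UDP is allowed"
fi
```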
+
+1. On the **Time** page of the local web UI of your device, you can select the time zone, and the primary and secondary NTP servers for your device.
+
+ 1. In the **Time zone** drop-down list, select the time zone that corresponds to the geographic location in which the device is being deployed.
+ The default time zone for your device is PST. Your device will use this time zone for all scheduled operations.
+
+ 2. In the **Primary NTP server** box, enter the primary server for your device or accept the default value of time.windows.com.
+ Ensure that your network allows NTP traffic to pass from your datacenter to the internet.
+
+ 3. Optionally, in the **Secondary NTP server** box, enter a secondary server for your device.
+
+ 4. To validate and apply the configured time settings, select **Apply**.
+
+ ![Screenshot of the Time page in the local web UI of an Azure Stack Edge device. The Apply button is highlighted.](./media/azure-stack-edge-pro-2-deploy-set-up-device-update-time/time-1.png)
+
+2. After the settings are applied, select **Next: Certificates**.
+
+## Next steps
+
+In this tutorial, you learned about:
+
+> [!div class="checklist"]
+>
+> * Prerequisites
+> * Configure device settings
+> * Configure update
+> * Configure time
+
+To learn how to configure certificates for your Azure Stack Edge Pro 2 device, see:
+
+> [!div class="nextstepaction"]
+> [Configure certificates](./azure-stack-edge-pro-2-deploy-configure-certificates.md)
databox-online Azure Stack Edge Pro 2 Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-limits.md
+
+ Title: Azure Stack Edge Pro 2 limits for device and service
+description: Learn about limits and recommended sizes as you deploy and operate Azure Stack Edge Pro 2, including service limits, device limits, and storage limits.
+ Last updated : 02/09/2022
+# Azure Stack Edge Pro 2 limits
+
+Consider these limits as you deploy and operate your Microsoft Azure Stack Edge Pro 2 solution.
+
+## Azure Stack Edge service limits
+
+## Azure Stack Edge Pro 2 device limits
+
+The following table describes the limits for the Azure Stack Edge Pro 2 device.
+
+| Description | Value |
+|||
+|No. of files per device |100 million |
+|No. of shares per container |1 |
+|Maximum no. of share endpoints and REST endpoints per device (GPU devices only)| 24 |
+|Maximum no. of tiered storage accounts per device (GPU devices only)| 24|
+|Maximum file size written to a share| 5 TB |
+|Maximum number of resource groups per device| 800 |
+
+## Azure storage limits
+
+## Data upload caveats
+
+## Azure storage account size limits
+
+## Azure object size limits
+
+## Next steps
+
+- [Prepare to deploy Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-prep.md)
+
databox-online Azure Stack Edge Pro 2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-overview.md
+
+ Title: Microsoft Azure Stack Edge Pro 2 overview | Microsoft Docs
+description: Describes Azure Stack Edge Pro 2, a storage solution that uses a physical device for network-based transfer into Azure.
+ Last updated : 03/03/2022
+#Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro GPU is and how it works so I can use it to process and transform data before sending to Azure.
+
+# What is Azure Stack Edge Pro 2?
+
+Azure Stack Edge Pro 2 is a new generation of an AI-enabled edge computing device offered as a service from Microsoft. This article provides an overview of the Azure Stack Edge Pro 2 solution, including its benefits, key capabilities, and the scenarios where you can deploy this device.
+
+Azure Stack Edge Pro 2 offers the following benefits over its precursor, the Azure Stack Edge Pro series:
+
+- This series offers multiple models that closely align with your compute, storage, and memory needs. Depending on the model you choose, the compute acceleration could be via one or two Graphical Processing Units (GPUs) on the device.
+- This series has flexible form factors with multiple mounting options. These devices can be rack mounted, mounted on a wall, or even placed on a shelf in your office.
+- These devices have low acoustic emissions and meet the requirements for noise levels in an office environment.
+
+## Use cases
+
+The Pro 2 series is designed for deployment in edge locations such as retail, telecommunications, manufacturing, or even healthcare. Azure Stack Edge Pro 2 can be used for rapid Machine Learning (ML) inferencing at the edge and for preprocessing data before sending it to Azure.
+
+## Key capabilities
+
+Azure Stack Edge Pro 2 has the following capabilities:
+
+|Capability |Description |
+|||
+|Accelerated AI inferencing| Enabled by the compute acceleration card. Depending on your compute needs, you may choose a model that comes with or without Graphical Processing Units (GPUs). <br> For more information, see [GPU sharing on your Azure Stack Edge device](azure-stack-edge-gpu-sharing.md).|
+|Edge computing |Supports VM and containerized workloads to allow analysis, processing, and filtering of data. <br>For information on VM workloads, see [VM overview on Azure Stack Edge](azure-stack-edge-gpu-virtual-machine-overview.md).<br>For containerized workloads, see [Kubernetes overview on Azure Stack Edge](azure-stack-edge-gpu-kubernetes-overview.md).|
+|Data access | Direct data access from Azure Storage Blobs and Azure Files using cloud APIs for additional data processing in the cloud. Local cache on the device is used for fast access of most recently used files.|
+|Cloud-managed |Device and service are managed via the Azure portal.|
+|Offline upload | Disconnected mode supports offline upload scenarios.|
+|Supported file transfer protocols | Support for standard Server Message Block (SMB), Network File System (NFS), and Representational state transfer (REST) protocols for data ingestion. <br> For more information on supported versions, see [Azure Stack Edge Pro 2 system requirements](azure-stack-edge-placeholder.md).|
+|Data refresh | Ability to refresh local files with the latest from cloud. <br> For more information, see [Refresh a share on your Azure Stack Edge](azure-stack-edge-gpu-manage-shares.md#refresh-shares).|
+|Encryption | BitLocker support to locally encrypt data and secure data transfer to cloud over *https*.|
+|Bandwidth throttling| Throttle to limit bandwidth usage during peak hours. <br> For more information, see [Manage bandwidth schedules on your Azure Stack Edge](azure-stack-edge-gpu-manage-bandwidth-schedules.md).|
+|Easy ordering| Bulk ordering and tracking of the device via Azure Edge Hardware Center. <br> For more information, see [Order a device via Azure Edge Hardware Center](azure-stack-edge-pro-2-deploy-prep.md#create-a-new-resource).|
+|Specialized network functions|Use the Marketplace experience from Azure Network Function Manager to rapidly deploy network functions. The functions deployed on Azure Stack Edge include mobile packet core, SD-WAN edge, and VPN services. <br>For more information, see [What is Azure Network Function Manager? (Preview)](../network-function-manager/overview.md).|
+|Scale out file server|The device is available as a single node or a two-node cluster. For more information, see [What is clustering on Azure Stack Edge devices? (Preview)](azure-stack-edge-placeholder.md).|
+
+<!--|ExpressRoute | Added security through ExpressRoute. Use peering configuration where traffic from local devices to the cloud storage endpoints travels over the ExpressRoute. For more information, see [ExpressRoute overview](../expressroute/expressroute-introduction.md).|-->
+
+## Components
+
+The Azure Stack Edge Pro 2 solution consists of the Azure Stack Edge resource, the Azure Stack Edge Pro 2 physical device, and a local web UI.
+
+* **Azure Stack Edge Pro 2 physical device** - A 2U compact size device supplied by Microsoft that can be configured to send data to Azure.
+
+ ![Perspective view of Azure Stack Edge Pro 2 physical device](./media/azure-stack-edge-pro-2-overview/azure-stack-edge-pro-2-perspective-view-1.png)
+
+ [!INCLUDE [azure-stack-edge-gateway-edge-hardware-center-overview](../../includes/azure-stack-edge-gateway-edge-hardware-center-overview.md)]
+
+ For more information, go to [Create an order for your Azure Stack Edge Pro 2 device](azure-stack-edge-gpu-deploy-prep.md#create-a-new-resource).
+
+* **Azure Stack Edge resource** - A resource in the Azure portal that lets you manage an Azure Stack Edge Pro 2 device from a web interface that you can access from different geographical locations. Use the Azure Stack Edge resource to create and manage resources, view, and manage devices and alerts, and manage shares.
+
+
+* **Azure Stack Edge Pro 2 local web UI** - A browser-based local user interface on your Azure Stack Edge Pro 2 device primarily intended for the initial configuration of the device. Use the local web UI also to run diagnostics, shut down and restart the device, or view copy logs.
+
+ [!INCLUDE [azure-stack-edge-gateway-local-web-ui-languages](../../includes/azure-stack-edge-gateway-local-web-ui-languages.md)]
+
+ For information about using the web-based UI, go to [Use the web-based UI to administer your Azure Stack Edge](azure-stack-edge-manage-access-power-connectivity-mode.md).
+
+## Region availability
+
+The Azure Stack Edge Pro 2 physical device, Azure resource, and target storage account to which you transfer data don't all have to be in the same region.
+
+- **Resource availability** - For this release, the resource is available in the East US, West Europe, and Southeast Asia regions.
+
+- **Device availability** - You should be able to see Azure Stack Edge Pro 2 as one of the available SKUs when placing the order.
+
+ For a list of all the countries/regions where the Azure Stack Edge Pro 2 device is available, go to the **Availability** section in the **Azure Stack Edge Pro** tab for [Azure Stack Edge Pro GPU pricing](https://azure.microsoft.com/pricing/details/azure-stack/edge/#azureStackEdgePro).
+
+- **Destination Storage accounts** - The storage accounts that store the data are available in all Azure regions. For optimum performance, the storage account regions should be located close to where the device is located; a storage account located far from the device results in long latencies and slower performance.
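+A simple way to compare candidate regions is to measure total request time against each storage account's Blob endpoint. This sketch assumes a hypothetical account name ("mystorageacct"); lower total time generally means better transfer performance:
+
```shell
# blob_endpoint builds the public Blob service URL for a storage account.
blob_endpoint() { echo "https://$1.blob.core.windows.net"; }

# Compare one or more candidate accounts; prefer the lowest total time.
for acct in mystorageacct; do
  t=$(curl -s -o /dev/null --max-time 10 -w '%{time_total}' \
      "$(blob_endpoint "$acct")") || t="unreachable"
  echo "$acct: ${t}s"
done
```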
+
+Azure Stack Edge service is a non-regional service. For more information, see [Regions and Availability Zones in Azure](../availability-zones/az-overview.md). Azure Stack Edge service doesn't have a dependency on a specific Azure region, making it resilient to zone-wide and region-wide outages.
+
+To understand how to choose a region for the Azure Stack Edge service, device, and data storage, see [Choosing a region for Azure Stack Edge](azure-stack-edge-gpu-regions.md).
+
+## Billing and pricing
+
+These devices can be ordered via the Azure Edge Hardware Center and are billed as a monthly service through the Azure portal. For more information, see [Azure Stack Edge Pro 2 pricing](azure-stack-edge-placeholder.md).
+
+## Next steps
+
+- Review the [Azure Stack Edge Pro 2 system requirements](azure-stack-edge-placeholder.md).
+
+- Understand the [Azure Stack Edge Pro 2 limits](azure-stack-edge-placeholder.md).
+
+- Deploy [Azure Stack Edge Pro 2](azure-stack-edge-placeholder.md) in Azure portal.
databox-online Azure Stack Edge Pro 2 Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-safety.md
+
+ Title: Safety instructions for Azure Stack Edge Pro 2 device | Microsoft Docs
+description: Describes safety conventions, guidelines, considerations, and explains how to safely install and operate your Azure Stack Edge Pro 2 device.
+ Last updated : 03/02/2022
+# Safety instructions for your Azure Stack Edge Pro 2
+
+To reduce the risk of bodily injury, electrical shock, fire, and equipment damage, read the following safety instructions and observe all warnings and precautions in this article before unpacking, installing, or maintaining this device.
+
+### Installation and handling precautions
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)**DANGER:**
+* Before you begin to unpack the equipment, read and follow all warnings and instructions to prevent hazardous situations resulting in death, serious injury, and/or property damage.
+* Inspect the as-received equipment for damage. If the equipment enclosure is damaged, [contact Microsoft Support](https://aka.ms/CONTACT-ASE-SUPPORT) to obtain a replacement. Don't attempt to operate the device.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)**CAUTION:**
+* If you suspect the device is malfunctioning, [contact Microsoft Support](https://aka.ms/CONTACT-ASE-SUPPORT) to obtain a replacement. Don't attempt to service the equipment.
+* Always wear the appropriate clothing to protect skin from sharp metal edges and avoid sliding any metal edges against skin. Always wear appropriate eye protection to avoid injury from objects that may become airborne.
+* Laser peripherals or devices may be present. To avoid risk of radiation exposure and/or personal injury, don't open the enclosure of any laser peripheral or device. Laser peripherals or devices aren't serviceable. Use only certified Laser Class I rated optical transceiver products.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Tip hazard](./media/azure-stack-edge-pro-2-safety/icon-safety-tip-hazard.png)**WARNING:**
+* When installing into an equipment rack, the rack must be anchored to an unmovable support to prevent it from tipping before the rack-mounted equipment is installed or extended from it. The equipment rack must be installed according to the rack manufacturer's instructions.
+* When using an equipment rack, the rack may tip over causing serious personal injury. Verify the equipment rack is anchored to the floor and/or bayed to its adjacent equipment racks before installing, extending, or removing equipment. Failure to do so could allow the rack system to tip over leading to death, injury, or damage.
+* When installed into an equipment rack, don't extend more than one piece of equipment (for example, storage or server) from the rack at a time, to prevent the equipment rack from becoming dangerously unstable.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Overload tip hazard 2](./media/azure-stack-edge-pro-2-safety/icon-overload-tip-hazard.jpg)![Tip hazard 4](./media/azure-stack-edge-pro-2-safety/icon-safety-tip-hazard.png) **WARNING:**
+* This equipment is not to be used as shelves or work spaces. Do not place objects on top of the equipment. Adding any type of load to a rack or wall mounted equipment can create a potential tip or crush hazard which could lead to injury, death, or product damage.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Electric shock hazard icon](./media/azure-stack-edge-pro-2-safety/icon-safety-electric-shock.png)![Do not access](./media/azure-stack-edge-pro-2-safety/icon-safety-do-not-access.png)**CAUTION:**
+* Parts enclosed within panels containing this symbol ![Do not access 2](./media/azure-stack-edge-pro-2-safety/icon-safety-do-not-access-tiny.png) contain no user-serviceable parts. Hazardous voltage, current, and energy levels are present inside. Don't open. Return to manufacturer for servicing. </br>Open a ticket with [Microsoft Support](https://aka.ms/CONTACT-ASE-SUPPORT).
+* The equipment contains coin cell batteries. There's a risk of explosion if the battery is replaced by an incorrect type. Dispose of used batteries according to the instructions.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Hot component surface](./media/azure-stack-edge-pro-2-safety/icon-hot-component-surface.png)**CAUTION:**
+* If the equipment has been running, any installed component, processor(s), and heat sink(s) may be hot. Allow the equipment to cool before opening the cover to avoid the possibility of coming into contact with hot component(s). Ensure that you're wearing proper personal protective equipment (PPE) with suitable thermal insulation when hot-swapping any components.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Moving parts hazard](./media/azure-stack-edge-pro-2-safety/icon-moving-parts-hazard.png)**CAUTION:**
+* Avoid wearing loose clothing items, jewelry, or loose long hair when working near an actively spinning fan.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Electric shock hazard icon 3](./media/azure-stack-edge-pro-2-safety/icon-safety-electric-shock.png)**WARNING:**
+* The system is designed to operate in a controlled environment. Choose a site that is:
+ * Indoors, not exposed to moisture or rain.
+ * Well ventilated and away from sources of heat including direct sunlight and radiators.
+ * Located in a space that minimizes vibration and physical shock.
+ * Isolated from strong electromagnetic fields produced by electrical devices.
+ * Provided with properly grounded outlets.
+ * Provided with sufficient space to access the power supply cord, because it serves as the product's main power disconnect.
+* To reduce the risk of fire or electric shock, install the equipment/system in a temperature-controlled indoor area free of conductive contaminants. Don't place the equipment near liquids or in an excessively humid environment.
+* Don't allow any liquid or any foreign object to enter the device. Don't place beverages or any other liquid containers on or near the device.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)**CAUTION:**
+* Elevated operating ambient - If installed in a closed or multi-unit rack assembly, the operating ambient temperature of the rack environment may be greater than room ambient. Therefore, consideration should be given to installing the equipment in an environment compatible with the maximum ambient temperature (Tma is 45°C) specified by the manufacturer.
+* Reduced air flow - Installation of the equipment in a rack should be such that the amount of air flow required for safe operation of the equipment isn't compromised. Carefully route cables as directed to minimize airflow blockage and cooling problems.
+* Don't use equipment if rails require excessive force when sliding the inner drawer assembly.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)**WARNING:**
+* This equipment has only been certified for use with mounting accessories provided with the equipment. The use of any other mounting device that hasn't been certified for use with this equipment may cause severe injuries.
+* When provided with the equipment, carefully follow all instructions provided with the Wall Mount Equipment Bracket or the Slide Rail Kits. Failure to install these accessories properly can cause severe injuries.
+* The two and four post Slide Rail Kits are only compatible with the rack specifications in Electronic Industries Association (EIA) standard EIA-310-D. Choosing a rack that doesn't comply with the EIA-310-D specifications can cause hazards that can lead to severe injuries.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Pinching hazard](./media/azure-stack-edge-pro-2-safety/icon-pinching-points.png)**CAUTION:**
+* Don't place fingers on the bearing tracks during slide rail installation (read the slide rail installation instructions). Sliding of rails over bearings can pose a risk of pinching.
+
+### Electrical precautions
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Electric shock hazard](./media/azure-stack-edge-pro-2-safety/icon-safety-electric-shock.png)**WARNING:**
+* Hazardous voltage, current, or energy levels are present inside this equipment and any component displaying this symbol: :::image type="content" source="media/azure-stack-edge-pro-2-safety/icon-safety-electric-shock-tiny.png" alt-text="Electric shock hazard description":::
Don't service the equipment until all input power is removed, unless directed otherwise by the service instructions in an accompanying document for the component being serviced. To remove all input power, the equipment power cable must be disconnected from the AC electrical mains supply. Don't remove cover or barrier on any component that contains this label: :::image type="content" source="media/azure-stack-edge-pro-2-safety/icon-safety-electric-shock-tiny.png" alt-text="Electric shock hazard description 2":::
+Servicing should only be performed by qualified trained technicians.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Electric shock hazard 3](./media/azure-stack-edge-pro-2-safety/icon-safety-electric-shock.png)**WARNING:**
+* Don't install equipment into a rack or on a wall while it's energized with external cables.
+* Ensure power cords aren't crushed or damaged during installation.
+* Provide a safe electrical earth connection to the power supply cord. The AC cord has a three-wire grounding plug (a plug that has a grounding contact). This plug fits only a grounded AC outlet. Don't defeat the purpose of the grounding contact.
+* Given that the plug on the power supply cord is the main disconnect device, ensure that the socket outlets are located near the equipment and are easily accessible.
+* Unplug the power cord (by pulling the plug, not the cord) and disconnect all cables if any of the following conditions exist:
+ * The power cord or plug becomes frayed or otherwise damaged
+ * You spill something into the device casing
+ * The device is exposed to rain, excess moisture, or other liquids
+ * The device has been dropped and the device casing is damaged
+ * You suspect the device needs service or repair
+* Permanently unplug the unit before you move it or if you think it has become damaged in any way.
+* Provide a suitable power source with electrical overload protection to meet the power specifications shown on the equipment rating label provided with the equipment.
+* Don't attempt to modify or use AC power cord(s) other than the ones provided with the equipment.
+
+![Safety warning](./media/azure-stack-edge-pro-2-safety/icon-safety-warning.png)![Electric shock hazard 2](./media/azure-stack-edge-pro-2-safety/icon-safety-electric-shock.png)![Moving parts hazard 2](./media/azure-stack-edge-pro-2-safety/icon-moving-parts-hazard.png)**WARNING:**
+* To reduce the risk of electrical shock, injury from moving parts, damage, or loss of data, always make sure to disconnect the equipment from the AC electrical source when working inside the equipment. Powering down the system doesn't ensure there's no electrical activity inside the equipment.
++
+### Electrostatic precautions
+
+![Safety notice](./media/azure-stack-edge-pro-2-safety/icon-safety-notice.png)**NOTICE:**
+* Electrostatic discharge (ESD) and ESD protection: ESD can damage drives, boards, and other parts. We recommend that you perform all procedures in this chapter only at an ESD workstation. If one isn't available, provide some ESD protection by wearing an antistatic wrist strap attached to chassis ground or any unpainted metal surface on the equipment when handling parts.
+* ESD and handling boards: Always handle boards carefully. They can be extremely sensitive to electrostatic discharge (ESD). Hold boards only by their edges. After removing a board from its protective wrapper or from the equipment, place the board component side up on a grounded, static-free surface. Use a conductive foam pad if available, but not the board wrapper. Don't slide the board over any surface.
+* Wear a grounded wrist strap. If none are available, discharge any personal static electricity by touching the bare metal chassis of the server, or the bare metal body of any other grounded device.
+* Humid environments tend to have less static electricity than dry environments. A grounding strap is warranted whenever danger of static electricity exists.
+
+![Safety notice](./media/azure-stack-edge-pro-2-safety/icon-safety-notice.png)**NOTICE:**
+* Leave all replacement components inside their static-proof packaging until you're ready to use them.
++
+## Regulatory information
+
+Regulatory model numbers: DB040 and DB040-W
+
+This equipment is designed for use with NRTL Listed (UL, CSA, ETL, etc.) and IEC/EN 60950-1 or IEC/EN 62368-1 compliant (CE marked) Information Technology equipment.
+
+This equipment is designed to operate in the following environment:
+
+* Temperature specifications
+ * Storage: –40°C to 70°C (–40°F to 158°F)
+ * Operating: 10°C to 45°C (50°F to 113°F)
+* Relative humidity specifications
+ * Storage: 5% to 95% relative humidity
+ * Operating: 5% to 85% relative humidity
+* Maximum altitude specifications
+ * Operating: 3,050 meters (10,000 feet)
+ * Storage: 9,150 meters (30,000 feet)
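As a quick check on the Fahrenheit equivalents listed above, Celsius converts to Fahrenheit as °F = °C × 9/5 + 32. The following sketch is an illustration only, not part of the product specifications:

```python
def c_to_f(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# Operating range from the specifications above.
print(c_to_f(10))   # 50.0 °F
print(c_to_f(45))   # 113.0 °F
# -40 is the point where both scales coincide.
print(c_to_f(-40))  # -40.0 °F
```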
+
+For electrical supply ratings, refer to the equipment rating label provided with the unit.
++
+### USA and Canada
+Supplier's Declaration of Conformity
+
+Models: DB040, DB040-W
++
+This device complies with part 15 of the FCC Rules and Industry Canada license-exempt RSS standard(s). Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation of the device.
+
+Any changes or modifications not expressly approved by the party responsible for compliance could void the user's authority to operate this equipment.
+++
+CAN ICES-3(A)/NMB-3(A)
+
+Microsoft Corporation, One Microsoft Way, Redmond, WA 98052, USA.
+
+United States: (800) 426-9400
+
+Canada: (800) 933-4750
++
+**For model: DB040-W only**
+
+Operation in the band 5150–5250 MHz is only for indoor use to reduce the potential for harmful interference to co-channel mobile satellite systems. Users are advised that high-power radars are allocated as primary users (priority users) of the bands 5250–5350 MHz and 5650–5850 MHz and these radars could cause interference and/or damage to LE-LAN devices.
++
+Exposure to Radio Frequency (RF) Energy
+
+This equipment should be installed and operated with a minimum distance of 20 cm (8 inches) between the radiator and your body. This transmitter must not be co-located or operated in conjunction with any other antenna or transmitter.
+
+This equipment complies with FCC/ISED radiation exposure limits set forth for an uncontrolled environment. Additional information about radiofrequency safety can be found on the FCC website at https://www.fcc.gov/general/radio-frequency-safety-0 and the Industry Canada website at http://www.ic.gc.ca/eic/site/smt-gst.nsf/eng/sf01904.html
+
+**Detachable antenna usage**
+This radio transmitter [IC: 7542A-MT7921] has been approved by Innovation, Science and Economic Development Canada to operate with the antenna types listed below, with the maximum permissible gain indicated. Antenna types not included in this list that have a gain greater than the maximum gain indicated for any type listed are strictly prohibited for use with this device.
+++
+### European Union
+
+* This device is a class A product. In a domestic environment, this product may cause radio interference in which case the user may be required to take adequate measures.
++
+For Model: DB040-W only
+
+Microsoft hereby declares that this device is in compliance with EU Directive 2014/53/EU and UK Radio Equipment Regulations 2017 (S.I. 2017/1206). The full text of the EU and UK declarations of conformity is available on the [product webpage](https://azure.microsoft.com/products/azure-stack/edge/#overview).
+
+This device may operate in all member states of the EU. Observe national and local regulations where the device is used. This device is restricted to indoor use only when operating in the 5150 - 5350 MHz frequency range in the following countries:
++
+In accordance with Article 10.8(a) and 10.8(b) of the Radio Equipment Directive (RED), the following table provides information on the frequency bands used and the maximum RF transmit power of the product for sale in the EU:
++
+|Frequency band (MHz) |Maximum EIRP (dBm) |
+|||
+|2400 - 2483.5 |19.74 |
+|5150 - 5350 |22.56 |
+|5470 - 5725 | 19.68 |
+|5725 - 5875 |13.83 |
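The EIRP values in the table are given in dBm, a logarithmic scale. If you need the equivalent in milliwatts (for example, when comparing against local RF regulations), the conversion is mW = 10^(dBm/10). A small illustrative sketch, not an official tool:

```python
def dbm_to_milliwatts(dbm: float) -> float:
    """Convert a power level in dBm to milliwatts: mW = 10 ** (dBm / 10)."""
    return 10 ** (dbm / 10)

# EIRP values from the table above.
print(round(dbm_to_milliwatts(19.74), 1))  # 2400-2483.5 MHz band, in mW
print(round(dbm_to_milliwatts(22.56), 1))  # 5150-5350 MHz band, in mW
```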
+++
+### Japan
++
+## Declarations of conformity
+
+A Declaration of Conformity (DoC) is a document stating that a product meets the legal standards to which it must adhere, such as safety regulations. Here is the declaration of conformity for the EU:
+
+![Screenshot of the Declaration of conformity for EU.](./media/azure-stack-edge-pro-2-safety/declaration-of-conformity-eu.png)
+
+Here is the declaration of conformity for the UK:
+
+![Screenshot of the Declaration of conformity for UK.](./media/azure-stack-edge-pro-2-safety/declaration-of-conformity-uk.png)
+
+## Next steps
+
+* [Prepare to deploy Azure Stack Edge Pro 2 device](azure-stack-edge-pro-2-deploy-prep.md)
databox-online Azure Stack Edge Pro 2 System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-system-requirements.md
+
+ Title: Azure Stack Edge Pro 2 system requirements
+description: Learn about the system requirements for your Azure Stack Edge Pro 2 solution and for the clients connecting to Azure Stack Edge Pro 2.
++++++ Last updated : 02/09/2022+++
+# System requirements for Azure Stack Edge Pro 2
+
+This article describes the important system requirements for your Azure Stack Edge Pro 2 solution and for the clients connecting to the Azure Stack Edge Pro 2 device. We recommend that you review the information carefully before you deploy your Azure Stack Edge Pro 2. You can refer back to this information as necessary during deployment and subsequent operation.
+
+The system requirements for the Azure Stack Edge Pro 2 include:
+
+- **Software requirements for hosts** - describes the supported platforms, browsers for the local configuration UI, SMB clients, and any additional requirements for the clients that access the device.
+- **Networking requirements for the device** - provides information about any networking requirements for the operation of the physical device.
+
+## Supported OS for clients connected to device
++
+## Supported protocols for clients accessing device
++
+## Supported Azure Storage accounts
++
+## Supported Edge storage accounts
+
+The following Edge storage accounts are supported with the REST interface of the device. The Edge storage accounts are created on the device. For more information, see [Edge storage accounts](azure-stack-edge-gpu-manage-storage-accounts.md#about-edge-storage-accounts).
+
+|Type |Storage account |Comments |
+||||
+|Standard |GPv1: Block Blob | |
+
+*Page blobs and Azure Files are currently not supported.
+
+## Supported local Azure Resource Manager storage accounts
+
+These storage accounts are created via the device local APIs when you are connecting to local Azure Resource Manager. The following storage accounts are supported:
+
+|Type |Storage account |Comments |
+||||
+|Standard |GPv1: Block Blob, Page Blob | SKU type is Standard_LRS |
+|Premium |GPv1: Block Blob, Page Blob | SKU type is Premium_LRS |
++
+## Supported storage types
+++
+## Supported browsers for local web UI
++
+## Networking port requirements
+
+### Port requirements for Azure Stack Edge Pro 2
+
+The following table lists the ports that need to be opened in your firewall to allow for SMB, cloud, or management traffic. In this table, *in* or *inbound* refers to the direction of incoming client requests that access your device. *Out* or *outbound* refers to the direction in which your Azure Stack Edge Pro 2 device sends data externally, beyond the deployment, for example, outbound to the internet.
++
+### Port requirements for IoT Edge
+
+Azure IoT Edge allows outbound communication from an on-premises Edge device to the Azure cloud using supported IoT Hub protocols. Inbound communication is only required for specific scenarios where Azure IoT Hub needs to push down messages to the Azure IoT Edge device (for example, cloud-to-device messaging).
+
+Use the following table for port configuration for the servers hosting Azure IoT Edge runtime:
+
+| Port no. | In or out | Port scope | Required | Guidance |
+|-|--||-|-|
+| TCP 443 (HTTPS)| Out | WAN | Yes | Outbound open for IoT Edge provisioning. This configuration is required when using manual scripts or Azure IoT Device Provisioning Service (DPS).|
+
+For complete information, go to [Firewall and port configuration rules for IoT Edge deployment](../iot-edge/troubleshoot.md).
++
+### Port requirements for Kubernetes on Azure Stack Edge
+
+| Port no. | In or out | Port scope | Required | Guidance |
+|-|--||-|-|
+| TCP 31000 (HTTPS)| In | LAN | In some cases. <br> See notes. |This port is required only if you are connecting to the Kubernetes dashboard to monitor your device. |
+| TCP 6443 (HTTPS)| In | LAN | In some cases. <br> See notes. |This port is required by Kubernetes API server only if you are using `kubectl` to access your device. |
+
+> [!IMPORTANT]
+> If your datacenter firewall is restricting or filtering traffic based on source IPs or MAC addresses, make sure that the compute IPs (Kubernetes node IPs) and MAC addresses are in the allowed list. The MAC addresses can be specified by running the `Set-HcsMacAddressPool` cmdlet on the PowerShell interface of the device.
+
+## URL patterns for firewall rules
+
+Network administrators can often configure advanced firewall rules based on URL patterns to filter inbound and outbound traffic. Your Azure Stack Edge Pro 2 device and the service depend on other Microsoft applications such as Azure Service Bus, Azure Active Directory Access Control, storage accounts, and Microsoft Update servers. The URL patterns associated with these applications can be used to configure firewall rules. It's important to understand that these URL patterns can change, which requires the network administrator to monitor and update the firewall rules for your Azure Stack Edge Pro 2 as needed.
+
+In most cases, we recommend that you set your firewall rules for outbound traffic based on the Azure Stack Edge Pro 2 fixed IP addresses. However, you can use the information below to set the advanced firewall rules that are needed to create secure environments.
+
+> [!NOTE]
+> - The device (source) IPs should always be set to all the cloud-enabled network interfaces.
+> - The destination IPs should be set to [Azure datacenter IP ranges](https://www.microsoft.com/download/confirmation.aspx?id=41653).
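As an illustration of the note above, you can check whether a destination IP falls inside a downloaded set of ranges with the Python standard library. The ranges below are hypothetical placeholders; in practice, load them from the published Azure datacenter IP ranges file:

```python
import ipaddress

# Hypothetical example ranges; substitute the real published ranges.
allowed_ranges = [ipaddress.ip_network(r) for r in ("20.36.0.0/14", "40.64.0.0/10")]

def destination_allowed(ip: str) -> bool:
    """Return True if the destination IP is inside any allowed range."""
    address = ipaddress.ip_address(ip)
    return any(address in network for network in allowed_ranges)

print(destination_allowed("40.90.1.5"))    # True
print(destination_allowed("192.168.0.1"))  # False
```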
+
+### URL patterns for gateway feature
++
+### URL patterns for compute feature
+
+| URL pattern | Component or functionality |
+|-||
+| https:\//mcr.microsoft.com<br></br>https://\*.cdn.mscr.io | Microsoft container registry (required) |
+| https://\*.azurecr.io | Personal and third-party container registries (optional) |
+| https://\*.azure-devices.net | IoT Hub access (required) |
+| https://\*.docker.com | StorageClass (required) |
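The wildcard patterns in these tables follow simple glob-style matching on the hostname. Here is a minimal sketch of how such a pattern list behaves, using the hostname portions of the patterns in the table above (this is an illustration, not a substitute for real firewall configuration):

```python
from fnmatch import fnmatch

# Hostname portions of the patterns in the table above.
allowed_patterns = [
    "mcr.microsoft.com",
    "*.cdn.mscr.io",
    "*.azurecr.io",
    "*.azure-devices.net",
    "*.docker.com",
]

def host_allowed(hostname: str) -> bool:
    """Return True if the hostname matches any allowed glob pattern.

    Note that fnmatch's '*' also matches dots, so '*.azurecr.io'
    matches nested subdomains as well.
    """
    return any(fnmatch(hostname, pattern) for pattern in allowed_patterns)

print(host_allowed("myregistry.azurecr.io"))  # True
print(host_allowed("example.com"))            # False
```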
+
+### URL patterns for monitoring
+
+Add the following URL patterns for Azure Monitor if you're using the containerized version of the Log Analytics agent for Linux.
+
+| URL pattern | Port | Component or functionality |
+|-|-|-|
+| https://\*ods.opinsights.azure.com | 443 | Data ingestion |
+| https://\*.oms.opinsights.azure.com | 443 | Operations Management Suite (OMS) onboarding |
+| https://\*.dc.services.visualstudio.com | 443 | Agent telemetry that uses Azure Public Cloud Application Insights |
+
+For more information, see [Network firewall requirements for monitoring container insights](../azure-monitor/containers/container-insights-onboard.md#network-firewall-requirements).
+
+### URL patterns for gateway for Azure Government
++
+### URL patterns for compute for Azure Government
+
+| URL pattern | Component or functionality |
+|-||
+| https:\//mcr.microsoft.com<br></br>https://\*.cdn.mscr.com | Microsoft container registry (required) |
+| https://\*.azure-devices.us | IoT Hub access (required) |
+| https://\*.azurecr.us | Personal and third-party container registries (optional) |
+
+### URL patterns for monitoring for Azure Government
+
+Add the following URL patterns for Azure Monitor if you're using the containerized version of the Log Analytics agent for Linux.
+
+| URL pattern | Port | Component or functionality |
+|-|-|-|
+| https://\*ods.opinsights.azure.us | 443 | Data ingestion |
+| https://\*.oms.opinsights.azure.us | 443 | Operations Management Suite (OMS) onboarding |
+| https://\*.dc.services.visualstudio.com | 443 | Agent telemetry that uses Azure Public Cloud Application Insights |
++
+## Internet bandwidth
++
+## Compute sizing considerations
+
+Use your experience while developing and testing your solution to ensure there is enough capacity on your Azure Stack Edge Pro 2 device and that you get optimal performance from it.
+
+Factors you should consider include:
+
+- **Container specifics** - Think about the following.
+
+ - What is your container footprint? How much memory, storage, and CPU is your container consuming?
+ - How many containers are in your workload? You could have a lot of lightweight containers versus a few resource-intensive ones.
+ - What resources are allocated to these containers, and what resources are they actually consuming (the footprint)?
+ - How many layers do your containers share? Container images are a bundle of files organized into a stack of layers. For your container image, determine the number of layers and their respective sizes to calculate resource consumption.
+ - Are there unused containers? A stopped container still takes up disk space.
+ - In which language are your containers written?
+- **Size of the data processed** - How much data will your containers be processing? Will this data consume disk space, or will it be processed in memory?
+- **Expected performance** - What are the desired performance characteristics of your solution?
+
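To reason about the shared-layer question above, remember that a layer shared by several images is stored on disk only once. A hypothetical back-of-the-envelope sketch (layer names and sizes are invented for illustration):

```python
def image_disk_footprint(images: dict) -> int:
    """Estimate disk use in MB for a set of images whose layers may be
    shared; each distinct layer is counted only once."""
    unique_layers = {}
    for layers in images.values():
        unique_layers.update(layers)
    return sum(unique_layers.values())

# Hypothetical images: layer name -> size in MB.
images = {
    "web": {"base-os": 80, "runtime": 120, "app-web": 30},
    "api": {"base-os": 80, "runtime": 120, "app-api": 25},
}
# A naive per-image sum would give 455 MB; shared layers bring it down.
print(image_disk_footprint(images))  # 255
```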
+To understand and refine the performance of your solution, you could use:
+
+- The compute metrics available in the Azure portal. Go to your Azure Stack Edge resource and then go to **Monitoring > Metrics**. Look at **Edge compute - Memory usage** and **Edge compute - Percentage CPU** to understand the available resources and how the resources are being consumed.
+- To monitor and troubleshoot compute modules, go to [Debug Kubernetes issues](azure-stack-edge-gpu-connect-powershell-interface.md#debug-kubernetes-issues-related-to-iot-edge).
+
+Finally, make sure that you validate your solution on your dataset and quantify the performance on Azure Stack Edge Pro 2 before deploying in production.
+
+## Next step
+
+- [Deploy your Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-prep.md)
databox-online Azure Stack Edge Pro 2 Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-technical-specifications-compliance.md
+
+ Title: Microsoft Azure Stack Edge Pro 2 technical specifications and compliance| Microsoft Docs
+description: Learn about the technical specifications and compliance for your Azure Stack Edge Pro 2 device
++++++ Last updated : 03/03/2022+++
+# Technical specifications and compliance for Azure Stack Edge Pro 2
+
+The hardware components of your Azure Stack Edge Pro 2 adhere to the technical specifications and regulatory standards outlined in this article. The technical specifications describe hardware, power supply units (PSUs), storage capacity, and enclosures.
+
+## Compute and memory specifications
+
+The Azure Stack Edge Pro 2 device has the following specifications for compute and memory:
+
+| Specification | Value |
+|-|--|
+| CPU type | Intel® Xeon® Gold 6209U CPU @ 2.10 GHz (Cascade Lake) |
+| CPU: raw | 20 total cores, 40 total vCPUs |
+| CPU: usable | 32 vCPUs |
+| Memory type | Model 64G2T: 64 GB |
+| Memory: raw | Model 64G2T: 64 GB RAM |
+| Memory: usable | Model 64G2T: 51 GB RAM |
+
+## Power supply unit specifications
+
+This device has one power supply unit (PSU) with high-performance fans. The following table lists the technical specifications of the PSU.
+
+| Specification | 550 W PSU |
+|-|-|
+| Maximum output power | 550 W |
+| Heat dissipation (maximum) | 550 W |
+| Voltage range selection | 100-127 V AC, 47-63 Hz, 7.1 A |
+| Voltage range selection | 200-240V AC, 47-63 Hz, 3.4 A |
+| Hot pluggable | No |
++
+## Network interface specifications
+
+Your Azure Stack Edge Pro 2 device has four network interfaces, Port 1 - Port 4.
+
+* **2 X 10 GBase-T/1000Base-T(10/1 GbE) interfaces**
+ * Port 1 is used for initial setup and is static by default. After the initial setup is complete, you can use the interface for data with any IP address. However, on reset, the interface reverts to a static IP.
+ * Port 2 is user configurable, can be used for data transfer, and is DHCP by default. These 10/1-GbE interfaces can also operate as 10-GbE interfaces.
+* **2 X 100-GbE interfaces**
+ * These data interfaces, Port 3 and Port 4, can be configured by the user as DHCP (default) or static.
++
+Your Azure Stack Edge Pro 2 device has the following network hardware:
+
+* **Onboard Intel Ethernet network adapter X722** - Port 1 and Port 2. [See here for details.](https://www.intel.com/content/www/us/en/ethernet-products/network-adapters/ethernet-x722-brief.html)
+* **Nvidia Mellanox dual port 100-GbE ConnectX-6 Dx network adapter** - Port 3 and Port 4. [See here for details.](https://www.nvidia.com/en-us/networking/ethernet/connectx-6-dx/)
+
+Here are the details for the Mellanox card:
+
+| Parameter | Description |
+|-|-|
+| Model | ConnectX®-6 Dx network interface card |
+| Model Description | 100 GbE dual-port QSFP56 |
+| Device Part Number | MCX623106AC-CDAT, with crypto or with secure boot |
+
+## Storage specifications
+
+The following table lists the storage capacity of the device.
+
+| Specification | Value |
+|-|--|
+| Number of data disks | 2 SATA SSDs |
+| Single data disk capacity | 960 GB |
+| Boot disk | 1 NVMe SSD |
+| Boot disk capacity | 960 GB |
+| Total capacity | Model 64G2T: 2 TB |
+| Total usable capacity | Model 64G2T: 720 GB |
+| RAID configuration | [Storage Spaces Direct with mirroring](/windows-server/storage/storage-spaces/storage-spaces-fault-tolerance#mirroring) |
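The gap between raw and usable capacity follows from the mirroring configuration: two-way mirroring keeps two copies of the data, so the two 960 GB data disks yield at most half their raw total, and the platform reserves further space on top of that. This is a rough interpretation, not an official capacity formula:

```python
# Two 960 GB data disks under two-way mirroring.
data_disks_gb = [960, 960]
raw_gb = sum(data_disks_gb)   # 1920 GB raw
mirrored_gb = raw_gb // 2     # 960 GB after two-way mirroring
print(raw_gb, mirrored_gb)    # 1920 960
# The 720 GB usable figure in the table above is lower still because
# the platform reserves additional space for the system.
```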
++
+## Enclosure dimensions and weight specifications
+
+The following tables list the various enclosure specifications for dimensions and weight.
+
+### Enclosure dimensions
+
+The following table lists the dimensions of the 2U device enclosure in millimeters and inches.
+
+| Enclosure | Millimeters | Inches |
+|-||-|
+| Height | 87.0 | 3.425 |
+| Width | 482.6 | 19.00 |
+| Depth | 430.5 | 16.95 |
+
+The following table lists the dimensions of the shipping package in millimeters and inches.
+
+| Package | Millimeters | Inches |
+|-||-|
+| Height | 241.3 | 9.50 |
+| Width | 768.4 | 30.25 |
+| Depth | 616.0 | 24.25 |
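Both dimension tables use the standard conversion of 1 inch = 25.4 mm; a quick check against the enclosure figures:

```python
def mm_to_inches(mm: float) -> float:
    """Convert millimeters to inches (1 inch = 25.4 mm)."""
    return mm / 25.4

# Enclosure height and width from the tables above.
print(round(mm_to_inches(87.0), 3))   # 3.425
print(round(mm_to_inches(482.6), 2))  # 19.0
```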
+
+### Enclosure weight
+
+| Line # | Hardware | Weight (lbs) |
+|--|||
+| 1 | Model 64G2T | 21 |
+| | | |
+| 2 | Shipping weight, with 4-post mount | 35.3 |
+| 3 | Model 64G2T install handling, 4-post (without bezel and with inner rails attached) | 20.4 |
+| 4 | 4-post in box | 6.28 |
+| | | |
+| 5 | Shipping weight, with 2-post mount | 32.1 |
+| 6 | Model 64G2T install handling, 2-post (without bezel and with inner rails attached) | 20.4 |
+| 7 | 2-post in box | 3.08 |
+| | | |
+| 8 | Shipping weight, with wall mount | 31.1 |
+| 9 | Model 64G2T install handling without bezel | 19.8 |
+| 10 | Wall mount as packaged | 2.16 |
+++
+## Next steps
+
+[Deploy your Azure Stack Edge Pro 2](azure-stack-edge-pro-2-deploy-prep.md)
databox-online Azure Stack Edge Pro 2 Two Post Rack Mounting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-two-post-rack-mounting.md
+
+ Title: Two-post rack mount Azure Stack Edge Pro 2 physical device
+description: This article contains instructions on how to rack the Azure Stack Edge Pro 2 device using a two-post mount.
++++++ Last updated : 02/23/2022+++
+# Tutorial: Rack the Azure Stack Edge Pro 2 using a two-post mount
+
+Azure Stack Edge Pro 2 is the next generation of an AI-enabled edge computing device that can transfer data over the network. This device is a part of the Hardware-as-a-service solution offered by Microsoft.
+
+The device must be installed on a standard 19-inch rack. Use the following procedure to rack mount your device on a standard 19-inch rack using a two-post mount.
+
+## Prerequisites
+
+* Before you begin, read the safety instructions in your Safety, Environmental, and Regulatory Information booklet. This booklet was shipped with the device.
+* Begin installing the rails in the allotted space that is closest to the bottom of the rack enclosure.
+* For the rack mounting configuration, you need to supply:
+ * Phillips-head screwdriver
+
+
+
+## Identify the rail kit contents
+
+* Inner rail
+* Chassis
+* The following screws and nuts:
+
+ | Rack Type | Screws | Nuts |
+ |-|--||
+ | Square hole | :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-screw-square-hole.png" alt-text="Image of square hole screw.":::M6X13 (8) | :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-nut-square-hole.png" alt-text="Image of square hole nut.":::M6 (8) |
+ | Round hole | :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-screw-round-hole.png" alt-text="Image of round hole screw.":::M5X13 (8) | :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-nut-round-hole.png" alt-text="Image of round hole nut.":::M5 (8) |
+
+## Install and remove rails
+
+1. Remove the inner rail. Pull the tab forward and take out the inner rail.
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-remove-inner-rail.png" alt-text="Remove inner rail.":::
+
+1. Install the inner rail onto the chassis. **Make sure to fasten the inner rail screw.**
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-install-inner-rail-onto-chassis.png" alt-text="Install inner rail onto chassis.":::
+
+1. Identify Bracket B:
+ :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-bracket-b.png" alt-text="Diagram of Bracket B.":::
+
+ Adjust the fastening position of Bracket B. Release the retainer by pulling and moving the retainer hook.
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-adjust-bracket-b.png" alt-text="Adjust Bracket B fastening position.":::
++
+1. While moving the retainer, loosen the screw by the oval holes on the retainer (there's no need to detach the screw). When fastening the rear bracket, use the oval hole on the retainer; don't use the other holes on the retainer.
+ :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-loosen-screw.png" alt-text="Loosen screws by the oval holes.":::
++
+1. Move Bracket B to the needed position and fasten the screw.
+ :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-move-bracket-b.png" alt-text="Move Bracket B.":::
+
+1. Move and hook the retainer back to the front.
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-two-post-rack-mounting/icon-front-rear-bracket.png" alt-text="Instructions for front and rear brackets.":::
+
+1. Insert the chassis to complete the installation.
+ 1. Ensure the ball bearing retainer is located at the front of the middle rail (reference diagram 1 and 2).
+ 1. Insert the chassis into the middle-outer rails (reference diagram 3).
+ 1. When you hit a stop, pull/push the blue release tab on the inner rails (reference diagram 4).
+ 1. Tighten the M5 screws of the chassis to the rail once the server is seated (reference diagram 5).
+
+## Remove the chassis
+1. Loosen the M5 screws of the chassis.
+1. Pull out the chassis.
+1. Press the disconnect tab forward to remove the chassis.
databox-online Azure Stack Edge Pro 2 Wall Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-wall-mount.md
+
+ Title: Wall mount Azure Stack Edge Pro 2 physical device
+description: This article contains instructions on how to rack the Azure Stack Edge Pro 2 device using a wall mount.
++++++ Last updated : 02/23/2022++
+# Tutorial: Wall mount the Azure Stack Edge Pro 2
+
+Azure Stack Edge Pro 2 is the next generation of an AI-enabled edge computing device that can transfer data over the network. This device is a part of the Hardware-as-a-service solution offered by Microsoft.
+
+The Azure Stack Edge Pro 2 device can be mounted on the wall. This article contains instructions to wall mount your device.
+
+## Prerequisites
+
+* Before you begin, read the safety instructions in your Safety, Environmental, and Regulatory Information booklet. This booklet was shipped with the device.
+* Begin installing the rails in the allotted space that is closest to the bottom of the rack enclosure.
+
+## Identify the rail kit contents
+
+* 1 X vertical wall mount bracket
+* 8 X M5 cage nuts
+* 8 X M5 screws
+* 6 X wood screws
+* 1 X instruction manual
++
+## Install the wall mount
+
+The unit can be mounted either vertically or horizontally (for example, under a desk) onto a wall.
+
+1. Make sure the mounting surface is sturdy enough to support the weight of the rack, plus all of the equipment to be installed into the rack. Fit the rack to the surface to ensure a proper fit, and mark the six mounting points.
+
+1. Use the provided self-tapping screws (for wood surfaces only) to affix the unit to the mounting surface.
+
+ :::image type="content" source="media/azure-stack-edge-pro-2-wall-mount/icon-wall-mount-self-tapping-screws.png" alt-text="Self tapping screws diagram.":::
+
+1. Once the rack is properly mounted to the surface, the rack mountable equipment can be installed. Use the supplied square cage nuts to provide the mounting points for the rack mountable equipment, then use the supplied cabinet screws to install the equipment into the rack.
databox-online Azure Stack Edge Pro R Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-security.md
Title: Azure Stack Edge Pro R security | Microsoft Docs
-description: Describes the security and privacy features that protect your Azure Stack Edge Pro R and Azure Stack Edge Mini R devices, service, and data on-premises and in the cloud.
+description: Describes the security and privacy features that protect your Azure Stack Edge Pro 2, Azure Stack Edge Pro R and Azure Stack Edge Mini R devices, service, and data on-premises and in the cloud.
Previously updated : 06/03/2021 Last updated : 02/25/2022
-# Security and data protection for Azure Stack Edge Pro R and Azure Stack Edge Mini R
+# Security and data protection for Azure Stack Edge Pro 2, Azure Stack Edge Pro R, and Azure Stack Edge Mini R
Security is a major concern when you're adopting a new technology, especially if the technology is used with confidential or proprietary data. Azure Stack Edge Pro R and Azure Stack Edge Mini R help you ensure that only authorized entities can view, modify, or delete your data.
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud Previously updated : 03/01/2022 Last updated : 03/03/2022 # Security alerts - a reference guide
Microsoft Defender for Containers provides security alerts on the cluster level
| Alert (alert type) | Description | MITRE tactics<br>([Learn more](#intentions)) | Severity | ||-|:--:||
+| **PREVIEW - Access from a suspicious application**<br>(Storage.Blob_SuspiciousApp) | Indicates that a suspicious application has successfully accessed a container of a storage account with authentication.<br>This might indicate that an attacker has obtained the credentials necessary to access the account, and is exploiting it. This could also be an indication of a penetration test carried out in your organization.<br>Applies to: Azure Blob Storage, Azure Data Lake Storage Gen2 | Initial Access | Medium |
| **Access from a suspicious IP address**<br>(Storage.Blob_SuspiciousIp<br>Storage.Files_SuspiciousIp) | Indicates that this storage account has been successfully accessed from an IP address that is considered suspicious. This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files, Azure Data Lake Storage Gen2 | Initial Access | Medium |
| **PREVIEW – Phishing content hosted on a storage account**<br>(Storage.Blob_PhishingContent<br>Storage.Files_PhishingContent) | A URL used in a phishing attack points to your Azure Storage account. This URL was part of a phishing attack affecting users of Microsoft 365.<br>Typically, content hosted on such pages is designed to trick visitors into entering their corporate credentials or financial information into a web form that looks legitimate.<br>This alert is powered by Microsoft Threat Intelligence.<br>Learn more about [Microsoft's threat intelligence capabilities](https://go.microsoft.com/fwlink/?linkid=2128684).<br>Applies to: Azure Blob Storage, Azure Files | Collection | High |
| **PREVIEW - Storage account identified as source for distribution of malware**<br>(Storage.Files_WidespreadeAm) | Antimalware alerts indicate that an infected file(s) is stored in an Azure file share that is mounted to multiple VMs. If attackers gain access to a VM with a mounted Azure file share, they can use it to spread malware to other VMs that mount the same share.<br>Applies to: Azure Files | Lateral Movement, Execution | High |
defender-for-cloud Defender For Container Registries Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-cicd.md
To enable vulnerability scans of images in your GitHub workflows:
> The push to the registry must happen before the results are published. ```yml
- - run: |
- echo "github.sha=$GITHUB_SHA"
- docker build -t githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}
+ - name: Build and Tag Image
+ run: |
+ echo "github.sha=$GITHUB_SHA"
+ docker build -t githubdemo1.azurecr.io/k8sdemo:${{ github.sha }} .
- uses: Azure/container-scan@v0 name: Scan image for vulnerabilities
To enable vulnerability scans of images in your GitHub workflows:
with: image-name: githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}
- - name: Push Docker image - githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}
+ - name: Push Docker image
run: |
- docker push githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}
+ docker push githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}
- name: Post logs to appinsights uses: Azure/publish-security-assessments@v0
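Assembled from the corrected diff above, the scan-enabled job steps read end to end as follows. This is a sketch only; the registry (`githubdemo1.azurecr.io`) and image (`k8sdemo`) names are the article's examples, and the step names come from the diff:

```yml
# Corrected order: build and tag, scan, push, then publish the scan results.
- name: Build and Tag Image
  run: |
    echo "github.sha=$GITHUB_SHA"
    docker build -t githubdemo1.azurecr.io/k8sdemo:${{ github.sha }} .

- name: Scan image for vulnerabilities
  uses: Azure/container-scan@v0
  with:
    image-name: githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}

- name: Push Docker image
  run: |
    docker push githubdemo1.azurecr.io/k8sdemo:${{ github.sha }}

- name: Post logs to appinsights
  uses: Azure/publish-security-assessments@v0
```

Note that the push step runs before the results are published, as the preceding note requires.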
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
Defender for Cloud pulls the image from the registry and runs it in an isolated
Defender for Cloud filters and classifies findings from the scanner. When an image is healthy, Defender for Cloud marks it as such. Defender for Cloud generates security recommendations only for images that have issues to be resolved. By only notifying you when there are problems, Defender for Cloud reduces the potential for unwanted informational alerts. ### Can I get the scan results via REST API?
-Yes. The results are under [Sub-Assessments Rest API](/rest/api/securitycenter/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
+Yes. The results are under [Sub-Assessments REST API](/rest/api/securitycenter/subassessments/list/). Also, you can use Azure Resource Graph (ARG), the Kusto-like API for all of your resources: a query can fetch a specific scan.
### What registry types are scanned? What types are billed? For a list of the types of container registries supported by Microsoft Defender for container registries, see [Availability](#availability).
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
If you want to create custom roles that can work with JIT, you'll need the detai
| To enable a user to: | Permissions to set| | | | |Configure or edit a JIT policy for a VM | *Assign these actions to the role:* <ul><li>On the scope of a subscription or resource group that is associated with the VM:<br/> `Microsoft.Security/locations/jitNetworkAccessPolicies/write` </li><li> On the scope of a subscription or resource group of VM: <br/>`Microsoft.Compute/virtualMachines/write`</li></ul> |
-|Request JIT access to a VM | *Assign these actions to the user:* <ul><li>On the scope of a subscription or resource group that is associated with the VM:<br/> `Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action` </li><li>On the scope of a subscription or resource group that is associated with the VM:<br/> `Microsoft.Security/locations/jitNetworkAccessPolicies/*/read` </li><li> On the scope of a subscription or resource group or VM:<br/> `Microsoft.Compute/virtualMachines/read` </li><li> On the scope of a subscription or resource group or VM:<br/> `Microsoft.Network/networkInterfaces/*/read` </li></ul>|
+|Request JIT access to a VM | *Assign these actions to the user:* <ul><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action` </li><li> `Microsoft.Security/locations/jitNetworkAccessPolicies/*/read` </li><li> `Microsoft.Compute/virtualMachines/read` </li><li> `Microsoft.Network/networkInterfaces/*/read` </li> <li> `Microsoft.Network/publicIPAddresses/read` </li></ul> |
|Read JIT policies| *Assign these actions to the user:* <ul><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/read`</li><li>`Microsoft.Security/locations/jitNetworkAccessPolicies/initiate/action`</li><li>`Microsoft.Security/policies/read`</li><li>`Microsoft.Security/pricings/read`</li><li>`Microsoft.Compute/virtualMachines/read`</li><li>`Microsoft.Network/*/read`</li>| ||| --- ## Next steps This page explained _why_ just-in-time (JIT) virtual machine (VM) access should be used. To learn about _how_ to enable JIT and request access to your JIT-enabled VMs, see the following:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 03/02/2022 Last updated : 03/03/2022 # What's new in Microsoft Defender for Cloud?
Learn how to [enable your database security at the subscription level](quickstar
Following our recent announcement [Native CSPM for GCP and threat protection for GCP compute instances](#native-cspm-for-gcp-and-threat-protection-for-gcp-compute-instances), Microsoft Defender for Containers has extended its Kubernetes threat protection, behavioral analytics, and built-in admission control policies to Google's Kubernetes Engine (GKE) Standard clusters. You can easily onboard any existing, or new GKE Standard clusters to your environment through our Automatic onboarding capabilities. Check out [Container security with Microsoft Defender for Cloud](defender-for-containers-introduction.md#vulnerability-assessment), for a full list of available features. + ## January 2022 Updates in January include:
digital-twins How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-move-regions.md
First, open **Azure Digital Twins Explorer** for your Azure Digital Twins instan
Selecting this button will open an Azure Digital Twins Explorer window connected to this instance. Follow the Azure Digital Twins Explorer instructions to [Export graph and models](how-to-use-azure-digital-twins-explorer.md#export-graph-and-models). Following these instructions will let you download a JSON file to your machine that contains the code for your models, twins, and relationships (including models that aren't currently being used in the graph).
Import the [JSON file that you downloaded](#download-models-twins-and-graph-with
To verify everything was uploaded successfully, switch back to the **Twin Graph** tab and select the **Run Query** button in the **Query Explorer** panel to run the default query that displays all twins and relationships in the graph. This action also refreshes the list of models in the **Models** panel. You should see your graph with all its twins and relationships displayed in the **Twin Graph** panel. You should also see your models listed in the **Models** panel.
digital-twins Quickstart Azure Digital Twins Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-azure-digital-twins-explorer.md
For this quickstart, the model files are already written and validated for you.
Follow these steps to upload models (the *.json* files you downloaded earlier).
-1. In the **Models** panel, select the **Upload a Model** icon that shows an arrow pointing into a cloud.
+1. In the **Models** panel, select the **Upload a Model** icon that shows an arrow pointing upwards.
:::image type="content" source="media/quickstart-azure-digital-twins-explorer/upload-model.png" alt-text="Screenshot of the Azure Digital Twins Explorer, highlighting the Models panel and the 'Upload a Model' icon in it." lightbox="media/quickstart-azure-digital-twins-explorer/upload-model.png"::: 1. In the Open window that appears, navigate to the folder containing the **Room.json** and **Floor.json** files that you downloaded earlier. 1. Select **Room.json** and **Floor.json**, and select **Open** to upload them both.
-Azure Digital Twins Explorer will upload these model files to your Azure Digital Twins instance. They should show up in the **Models** panel and display their friendly names and full model IDs. You can select the **View Model** information icons to see the DTDL code behind them.
+Azure Digital Twins Explorer will upload these model files to your Azure Digital Twins instance. They should show up in the **Models** panel and display their friendly names and full model IDs. You can select **View Model** for either model to see the DTDL code behind it.
:::row::: :::column:::
Follow these steps to import the graph (the *.xlsx* file you downloaded earlier)
1. In the **Twin Graph** panel, select the **Import Graph** icon that shows an arrow pointing into a cloud.
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/import-graph.png" alt-text="Screenshot of the Azure Digital Twins Explorer showing the Graph View panel, with the 'Import Graph' icon highlighted." lightbox="media/quickstart-azure-digital-twins-explorer/import-graph.png":::
+ :::image type="content" source="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-import.png" alt-text="Screenshot of Azure Digital Twins Explorer Twin Graph panel. The Import Graph button is highlighted." lightbox="media/how-to-use-azure-digital-twins-explorer/twin-graph-panel-import.png":::
2. In the Open window, navigate to the **buildingScenario.xlsx** file you downloaded earlier. This file contains a description of the sample graph. Select **Open**.
Follow these steps to import the graph (the *.xlsx* file you downloaded earlier)
:::column-end::: :::row-end:::
-5. The graph has now been uploaded to Azure Digital Twins Explorer. Switch back to the **Twin Graph** panel.
-
- :::image type="content" source="media/quickstart-azure-digital-twins-explorer/twin-graph-tab.png" alt-text="Screenshot of the Azure Digital Twins Explorer with the Twin Graph tab highlighted." lightbox="media/quickstart-azure-digital-twins-explorer/twin-graph-tab.png":::
+ The graph has now been uploaded to Azure Digital Twins Explorer, and the **Twin Graph** panel will reload. It will appear empty.
6. To see the graph, select the **Run Query** button in the **Query Explorer** panel, near the top of the Azure Digital Twins Explorer window.
dms Migrate Mysql To Azure Mysql Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/migrate-mysql-to-azure-mysql-powershell.md
In this article, you migrate a MySQL database restored to an on-premises instanc
> [!NOTE]
-> Currently it is not possible to run complete database migration using the Az.DataMigration module. In the meantime, the sample PowerShell script is provided "as-is" that uses the [DMS Rest API](/rest/api/datamigration/tasks/get) and allows you to automate migration. This script will be modified or deprecated, once official support is added in the Az.DataMigration module and Azure CLI.
+> Currently it is not possible to run complete database migration using the Az.DataMigration module. In the meantime, the sample PowerShell script is provided "as-is" that uses the [DMS REST API](/rest/api/datamigration/tasks/get) and allows you to automate migration. This script will be modified or deprecated, once official support is added in the Az.DataMigration module and Azure CLI.
> [!NOTE] > Amazon Relational Database Service (RDS) for MySQL and Amazon Aurora (MySQL-based) are also supported as sources for migration.
expressroute Expressroute Optimize Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-optimize-routing.md
There are two solutions to the problem. The first one is that you simply adverti
The second solution is that you continue to advertise both of the prefixes on both ExpressRoute circuits, and in addition you give us a hint of which prefix is close to which one of your offices. Because we support BGP AS Path prepending, you can configure the AS Path for your prefix to influence routing. In this example, you can lengthen the AS PATH for 172.2.0.0/31 in US East so that we will prefer the ExpressRoute circuit in US West for traffic destined for this prefix (as our network will think the path to this prefix is shorter in the west). Similarly you can lengthen the AS PATH for 172.2.0.2/31 in US West so that we'll prefer the ExpressRoute circuit in US East. Routing is optimized for both offices. With this design, if one ExpressRoute circuit is broken, Exchange Online can still reach you via another ExpressRoute circuit and your WAN. > [!IMPORTANT]
-> We remove private AS numbers in the AS PATH for the prefixes received on Microsoft Peering when peering using a private AS number. You need to peer with a public AS and append public AS numbers in the AS PATH to influence routing for Microsoft Peering.
+> We remove private AS numbers in the AS PATH for the prefixes received on Microsoft Peering and Private Peering when peering using a private AS number. You need to peer with a public AS and append public AS numbers in the AS PATH to influence routing for Microsoft Peering.
> >
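As a sketch only (not from the article), the AS PATH prepending described above could look like the following Cisco-style configuration applied on the US East circuit. The AS number 64496 is a documentation placeholder standing in for your public AS, and `<msee-ip>` stands in for the Microsoft peer address; vendor syntax varies:

```
! Prepend your public AS (64496 here, a documentation placeholder) to 172.2.0.0/31
! on the US East circuit so Microsoft prefers the US West path for that prefix.
ip prefix-list WEST-OFFICE seq 10 permit 172.2.0.0/31
route-map PREPEND-TO-MSFT permit 10
 match ip address prefix-list WEST-OFFICE
 set as-path prepend 64496 64496
route-map PREPEND-TO-MSFT permit 20
router bgp 64496
 neighbor <msee-ip> route-map PREPEND-TO-MSFT out
```

The mirror-image configuration on the US West circuit would prepend for 172.2.0.2/31 instead.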
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In PowerShell, run the following command: ```azurepowershell-interactive
- New-AzADServicePrincipal -ApplicationId "205478c0-bd83-4e1b-a9d6-db63a3e1e1c8" -Role Contributor
+ New-AzADServicePrincipal -ApplicationId "ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037" -Role Contributor
``` ##### Azure CLI
Register the service principal for Azure Front Door as an app in your Azure Acti
2. In CLI, run the following command: ```azurecli-interactive
- az ad sp create --id 205478c0-bd83-4e1b-a9d6-db63a3e1e1c8 --role Contributor
+ az ad sp create --id ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037 --role Contributor
``` #### Grant Azure Front Door access to your key vault
Grant Azure Front Door permission to access the certificates in your Azure Key
1. In your key vault account, under SETTINGS, select **Access policies**, then select **Add new** to create a new policy.
-2. In **Select principal**, search for **205478c0-bd83-4e1b-a9d6-db63a3e1e1c8**, and choose **Microsoft.Azure.Frontdoor**. Click **Select**.
+2. In **Select principal**, search for **ad0e1c7e-6d38-4ba4-9efd-0bc77ba9f037**, and choose **Microsoft.Azure.Frontdoor**. Click **Select**.
3. In **Secret permissions**, select **Get** to allow Front Door to retrieve the certificate.
governance Get Resource Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/how-to/get-resource-changes.md
Title: Get resource changes description: Understand how to find when a resource was changed and query the list of resource configuration changes at scale Previously updated : 01/27/2022 Last updated : 02/18/2022 # Get resource changes
Monitor.
## Find detected change events and view change details
-When a resource is created, updated, or deleted, a new change resource (Microsoft.Resources/changes) is created to extend the modified resource and represent the changed properties.
+When a resource is created, updated, or deleted, a new change resource (Microsoft.Resources/changes) is created to extend the modified resource and represent the changed properties. Change records should be available in under five minutes.
Example change resource property bag:
Each change resource has the following properties:
- **previousResourceSnapshotId** - Contains the ID of the resource snapshot that was used as the previous state of the resource. - **newResourceSnapshotId** - Contains the ID of the resource snapshot that was used as the new state of the resource.
-## Resource Graph Query samples
+## How to query changes using Resource Graph
+### Prerequisites
+- To enable Azure PowerShell to query Azure Resource Graph, the [module must be added](../first-query-powershell.md#add-the-resource-graph-module).
+- To enable Azure CLI to query Azure Resource Graph, the [extension must be added](../first-query-azurecli.md#add-the-resource-graph-extension).
-With Resource Graph, you can query the **ResourceChanges** table to filter or sort by any of the change resource properties:
+### Run your Resource Graph query
+It's time to try out a tenant-based Resource Graph query of the **resourcechanges** table. The query returns five Azure resource changes with the change time, change type, target resource ID, target resource type, and change details of each change record. To query by
+[management group](../../management-groups/overview.md) or subscription, use the `-ManagementGroup`
+or `-Subscription` parameters.
+
+1. Run your first Azure Resource Graph query:
+
+# [Azure CLI](#tab/azure-cli)
+ ```azurecli-interactive
+ # Login first with az login if not using Cloud Shell
+
+ # Run Azure Resource Graph query
+ az graph query -q 'resourcechanges | project properties.changeAttributes.timestamp, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes | limit 5'
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ```azurepowershell-interactive
+ # Login first with Connect-AzAccount if not using Cloud Shell
+
+ # Run Azure Resource Graph query
+ Search-AzGraph -Query 'resourcechanges | project properties.changeAttributes.timestamp, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes | limit 5'
+ ```
+
+# [Portal](#tab/azure-portal)
+ Open the [Azure portal](https://portal.azure.com), then follow these steps to find Resource Graph
+ Explorer and run your first Resource Graph query:
+
+ 1. Select **All services** in the left pane. Search for and select **Resource Graph Explorer**.
+
+ 1. In the **Query 1** portion of the window, enter the query
+ ```kusto
+ resourcechanges
+ | project properties.changeAttributes.timestamp, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes
+ | limit 5
+ ```
+ and select **Run query**.
+
+ 1. Review the query response in the **Results** tab. Select the **Messages** tab to see details
+ about the query, including the count of results and duration of the query. Errors, if any, are
+ displayed under this tab.
+
++
+ > [!NOTE]
+ > As this query example doesn't provide a sort modifier such as `order by`, running this query
+ > multiple times is likely to yield a different set of resources per request.
++
+2. Update the query to specify a more user-friendly column name for the **timestamp** property:
+
+# [Azure CLI](#tab/azure-cli)
+ ```azurecli-interactive
+ # Run Azure Resource Graph query with 'extend' to define a user-friendly name for properties.changeAttributes.timestamp
+ az graph query -q 'resourcechanges | extend changeTime=todatetime(properties.changeAttributes.timestamp) | project changeTime, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes | limit 5'
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ```azurepowershell-interactive
+ # Run Azure Resource Graph query with 'extend' to define a user-friendly name for properties.changeAttributes.timestamp
+ Search-AzGraph -Query 'resourcechanges | extend changeTime=todatetime(properties.changeAttributes.timestamp) | project changeTime, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes | limit 5'
+ ```
+
+# [Portal](#tab/azure-portal)
+ ```kusto
+ resourcechanges
+ | extend changeTime=todatetime(properties.changeAttributes.timestamp)
+ | project changeTime, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes
+ | limit 5
+ ```
+ Then, select **Run query**.
++++
+3. To get the most recent changes, update the query to `order by` the user-defined **changeTime** property:
+
+# [Azure CLI](#tab/azure-cli)
+ ```azurecli-interactive
+ # Run Azure Resource Graph query with 'order by'
+ az graph query -q 'resourcechanges | extend changeTime=todatetime(properties.changeAttributes.timestamp) | project changeTime, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes | order by changeTime desc | limit 5'
+ ```
+
+# [PowerShell](#tab/azure-powershell)
+ ```azurepowershell-interactive
+ # Run Azure Resource Graph query with 'order by'
+ Search-AzGraph -Query 'resourcechanges | extend changeTime=todatetime(properties.changeAttributes.timestamp) | project changeTime, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes | order by changeTime desc | limit 5'
+ ```
+
+# [Portal](#tab/azure-portal)
+ ```kusto
+ resourcechanges
+ | extend changeTime=todatetime(properties.changeAttributes.timestamp)
+ | project changeTime, properties.changeType, properties.targetResourceId, properties.targetResourceType, properties.changes
+ | order by changeTime desc
+ | limit 5
+ ```
+ Then, select **Run query**.
+
++
+ > [!NOTE]
+ > The order of the query commands is important. In this example,
+ > the `order by` must come before the `limit` command. This command order first orders the query results by the change time and
+ > then limits them to ensure that you get the five *most recent* results.
++
+When the final query is run several times, assuming that nothing in your environment is changing,
+the results returned are consistent and ordered by the **properties.changeAttributes.timestamp** (or your user-defined name of **changeTime**) property, but still limited to the
+top five results.
++
+> [!NOTE]
+> If the query doesn't return results from a subscription you already have access to, note that
+> the `Search-AzGraph` PowerShell cmdlet defaults to the subscriptions in the default context. To see the list of
+> subscription IDs that are part of the default context, run
+> `(Get-AzContext).Account.ExtendedProperties.Subscriptions`. To search across all the
+> subscriptions you have access to, set the `PSDefaultParameterValues` for the `Search-AzGraph`
+> cmdlet by running
+> `$PSDefaultParameterValues=@{"Search-AzGraph:Subscription"= $(Get-AzSubscription).ID}`.
+
+Resource Graph Explorer also provides a clean interface for converting the results of some queries into a chart that can be pinned to an Azure dashboard.
+- [Create a chart from the Resource Graph query](../first-query-portal.md#create-a-chart-from-the-resource-graph-query)
+- [Pin the query visualization to a dashboard](../first-query-portal.md#pin-the-query-visualization-to-a-dashboard)
+
+## Resource Graph query samples
+
+With Resource Graph, you can query the **resourcechanges** table to filter or sort by any of the change resource properties:
### All changes in the past one day ```kusto
-ResourceChanges
+resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId), changeType = tostring(properties.changeType), correlationId = properties.changeAttributes.correlationId,  changedProperties = properties.changes, changeCount = properties.changeAttributes.changesCount
changedProperties = properties.changes, changeCount = properties.changeAttr
### Resources deleted in a specific resource group ```kusto
-ResourceChanges
+resourcechanges
| where resourceGroup == "myResourceGroup" | extend changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId), changeType = tostring(properties.changeType), correlationId = properties.changeAttributes.correlationId
changeType = tostring(properties.changeType), correlationId = properties.ch
### Changes to a specific property value ```kusto
-ResourceChanges
+resourcechanges
| extend provisioningStateChange = properties.changes["properties.provisioningState"], changeTime = todatetime(properties.changeAttributes.timestamp), targetResourceId = tostring(properties.targetResourceId), changeType = tostring(properties.changeType) | where isnotempty(provisioningStateChange)and provisioningStateChange.newValue == "Succeeded" | order by changeTime desc
ResourceChanges
### Query the latest resource configuration for resources created in the last seven days ```kusto
-ResourceChanges
+resourcechanges
| extend targetResourceId = tostring(properties.targetResourceId), changeType = tostring(properties.changeType), changeTime = todatetime(properties.changeAttributes.timestamp) | where changeTime > ago(7d) and changeType == "Create" | project targetResourceId, changeType, changeTime
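Along the same lines, change records can also be aggregated. The following query is a hypothetical example (not from the original article) that counts changes in the past day by change type:

```kusto
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp), changeType = tostring(properties.changeType)
| where changeTime > ago(1d)
| summarize changeCount = count() by changeType
| order by changeCount desc
```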
hdinsight Apache Hadoop Use Hive Curl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-use-hive-curl.md
Learn how to use the WebHCat REST API to run Apache Hive queries with Apache Had
* If you use Bash, you'll also need jq, a command-line JSON processor. See [https://stedolan.github.io/jq/](https://stedolan.github.io/jq/).
-## Base URI for Rest API
+## Base URI for REST API
The base Uniform Resource Identifier (URI) for the REST API on HDInsight is `https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters/CLUSTERNAME`, where `CLUSTERNAME` is the name of your cluster. Cluster names in URIs are **case-sensitive**. While the cluster name in the fully qualified domain name (FQDN) part of the URI (`CLUSTERNAME.azurehdinsight.net`) is case-insensitive, other occurrences in the URI are case-sensitive.
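The case rule can be illustrated in Bash. This sketch, which assumes a hypothetical cluster named `MyCluster`, lowercases only the FQDN host portion (which is case-insensitive) while preserving the cluster name's case in the path (which is case-sensitive):

```bash
# Hypothetical cluster name; only the FQDN host part tolerates case changes.
CLUSTERNAME="MyCluster"

# Lowercase the host portion, keep the path portion exactly as the cluster was named.
HOST="$(echo "$CLUSTERNAME" | tr '[:upper:]' '[:lower:]').azurehdinsight.net"
BASE_URI="https://${HOST}/api/v1/clusters/${CLUSTERNAME}"
echo "$BASE_URI"
# prints https://mycluster.azurehdinsight.net/api/v1/clusters/MyCluster
```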
hdinsight Hdinsight Hadoop Manage Ambari Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-manage-ambari-rest-api.md
Apache Ambari simplifies the management and monitoring of Hadoop clusters by pro
* Windows PowerShell. Or you can use Bash.
-## Base Uniform Resource Identifier for Ambari Rest API
+## Base Uniform Resource Identifier for Ambari REST API
The base Uniform Resource Identifier (URI) for the Ambari REST API on HDInsight is `https://CLUSTERNAME.azurehdinsight.net/api/v1/clusters/CLUSTERNAME`, where `CLUSTERNAME` is the name of your cluster. Cluster names in URIs are **case-sensitive**. While the cluster name in the fully qualified domain name (FQDN) part of the URI (`CLUSTERNAME.azurehdinsight.net`) is case-insensitive, other occurrences in the URI are case-sensitive.
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
This release applies both for HDInsight 3.6 and 4.0.
HDInsight Identity Broker (HIB) enables users to sign in to Apache Ambari using multi-factor authentication (MFA) and get the required Kerberos tickets without needing password hashes in Azure Active Directory Domain Services (AAD-DS). Currently HIB is only available for clusters deployed through an Azure Resource Manager (ARM) template.
-#### Kafka Rest API Proxy (Preview)
+#### Kafka REST API Proxy (Preview)
-Kafka Rest API Proxy provides one-click deployment of highly available REST proxy with Kafka cluster via secured Azure AD authorization and OAuth protocol.
+Kafka REST API Proxy provides one-click deployment of highly available REST proxy with Kafka cluster via secured Azure AD authorization and OAuth protocol.
#### Auto scale
Fixed issues represent selected issues that were previously logged via Hortonwor
**Workaround**
- - **Option \#1: Create/Update policy via Ranger Rest API**
+ - **Option \#1: Create/Update policy via Ranger REST API**
REST URL: http://&lt;host&gt;:6080/service/plugins/policies
hdinsight Interactive Query Troubleshoot Migrate 36 To 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md
from uuid_test
This is caused by differences in WebHCat (Templeton) between HDInsight 3.6 and HDInsight 4.0.
-* Hive Rest API - add ```arg=--showHeader=false -d arg=--outputformat=tsv2 -d```
+* Hive REST API - add ```arg=--showHeader=false -d arg=--outputformat=tsv2 -d```
* .NET SDK - initialize args of ```HiveJobSubmissionParameters``` ```csharp
hpc-cache Move Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/move-resource.md
+
+ Title: Move an Azure HPC Cache to a different region
+description: Information about how to move or recreate an Azure HPC Cache in another region
++++ Last updated : 03/03/2022+
+#Customer intent: As an HPC Cache administrator, I want to move a cache to another region so that it can be used with different services or provide failover for another cache instance.
++
+# How to move an Azure HPC Cache to another region
+
+This article describes how to move Azure HPC Cache resources to a different Azure region. You might want to move your workflow to a different region in order to take advantage of different services that are available there, or to access storage accounts in that region. Moving also can be necessary to meet policy requirements or for capacity planning.
+
+Each HPC Cache is tied to the region where it was created, so it can't be moved directly. Instead, you can create a duplicate HPC Cache in the new region and delete the original cache.
+
+A duplicate HPC Cache in a different Azure region also can be part of a failover recovery strategy, as explained in [Use multiple caches for regional failover recovery](hpc-region-recovery.md).
+
+## Prerequisites
+
+Before you create a replacement HPC Cache in another region, make careful notes of these items from the original cache so that you can replicate them in the new cache.
+
+* Details of the virtual network and subnet structure
+* Storage target details, names, and namespace paths
+* The mount command used by cache clients
+* Structure and names of blob storage containers, if you also need to move them to the new region
+* Details of any Azure Monitor alerts configured for your cache
+
+## Prepare
+
+To prepare to create a copy of an Azure HPC Cache in a new region, you can download an [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) from your existing cache. In the Azure portal, use the **Export template** page in the **Automation** section of the left menu to create a template.
+
+If you originally created the cache from a script or an existing template, you can reuse those methods to create a replica cache in the new region.
+
+### Create network and storage infrastructure (if needed)
+
+In the new region, move or recreate the infrastructure needed for the cache.
+
+Make sure your new region has a virtual network to hold the cache, and the required subnets. Depending on your configuration, you might need to move or re-create Blob containers for your storage targets.
+
+Confirm that the new resources meet all of the requirements described in the cache [Prerequisites](hpc-cache-prerequisites.md) article.
+
+### Shut down the cache
+
+Before moving the cache, stop the cache and disconnect clients. Follow these steps:
+
+1. Allow client workloads to complete, if needed.
+1. Unmount client machines from the cache.
+1. [Stop the cache](hpc-cache-manage.md#stop-the-cache).
+ 1. The cache will synchronize its data with long-term storage systems, which can take some time depending on your cache settings and storage infrastructure.
+ 1. Wait until the cache status changes to **Stopped**.
+
+> [!TIP]
+> If you need to move or copy data to the new region, you can begin that process as soon as the original cache is stopped.
+
+## Move
+
+Follow these basic steps to decommission and re-create the HPC Cache in a different region.
+
+1. If you have not already done so, follow the steps above to [shut down the cache](#shut-down-the-cache).
+1. Update the Azure Resource Manager template from the old cache to include the correct information for the new cache. Check both the parameters file and the template file for updates. Or, if you will use a different deployment script, update the information there.
+1. If needed, move Blob storage containers to the new region, or copy data from your old region to new containers. (You can begin this process any time after stopping the original cache.)
+
+ Refer to [Move an Azure Storage account to another region](../storage/common/storage-account-move.md) for help.
+
+ Keep these tips in mind:
+
+ * If you use [AzCopy](../storage/common/storage-use-azcopy-v10.md), you must use AzCopy V10 or later; earlier versions are unsupported for some types of HPC Cache storage.
+ * If you move an NFS-enabled blob container (ADLS-NFS storage target), be aware of the risk of mixing blob-style writes with NFS writes. Read more about this in [Use NFS-mounted blob storage with Azure HPC Cache](nfs-blob-considerations.md#pre-load-data-with-nfs-protocol).
+
+1. Create a new cache in your target region using a convenient method. Read [Template deployment](../azure-resource-manager/templates/overview.md#template-deployment-process) to learn how to use your saved template. Read [Create an HPC Cache](hpc-cache-create.md) to learn about other methods.
+1. Wait until the cache has been created and appears in your subscription's **Resources** list with a status of **Healthy**.
+1. Follow the documentation instructions to re-create storage targets and configure other cache settings.
+1. When you are ready, mount clients to the new cache using its IP addresses.
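+The blob copy in step 3 can be done as a service-to-service transfer with AzCopy V10. This is only a sketch; the account names, container name, and SAS tokens are placeholders:

```azcopy
azcopy copy \
  'https://<source-account>.blob.core.windows.net/<container>?<SAS>' \
  'https://<destination-account>.blob.core.windows.net/<container>?<SAS>' \
  --recursive
```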
+
+## Verify
+
+Use the Azure portal to inspect the new cache and storage resources in the new region. Verify that all items from the list in [Prerequisites](#prerequisites) have been created.
+
+## Clean up source resources
+
+If you haven't already done so, [delete](hpc-cache-manage.md?#delete-the-cache) the original cache. Also delete its virtual networks and any other resources in the original region that are no longer needed.
+
+If you deployed all of your cache's resources in a unique resource group *and will not use the same resource group in the new region*, you can delete the resource group to remove all cache resources from the old region.
hpc-cache Nfs Blob Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/nfs-blob-considerations.md
description: Describes procedures and limitations when using ADLS-NFS blob stora
Previously updated : 07/12/2021- Last updated : 03/02/2022+ # Use NFS-mounted blob storage with Azure HPC Cache
Azure HPC Cache uses NFS-enabled blob storage in its ADLS-NFS storage target typ
This article explains strategies and limitations that you should understand when you use ADLS-NFS storage targets.
-You should also read the NFS blob documentation, especially these sections that describe compatible and incompatible scenarios:
+You should also read the NFS blob documentation, especially these sections that describe compatible and incompatible scenarios, and give troubleshooting tips:
* [Feature overview](../storage/blobs/network-file-system-protocol-support.md)
* [Performance considerations](../storage/blobs/network-file-system-protocol-support-performance.md)
* [Known issues and limitations](../storage/blobs/network-file-system-protocol-known-issues.md)
+* [How-to procedure and troubleshooting guide](../storage/blobs/network-file-system-protocol-support-how-to.md#resolve-common-errors)
## Understand consistency requirements
iot-central Concepts Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-architecture.md
Build integrations that let other applications and services manage your applicat
## Next steps
-Now that you've learned about the architecture of Azure IoT Central, the suggested next step is to learn about [scalability and high availability](concepts-scalability-availability.md) in Azure IoT Central.
+Now that you've learned about the architecture of Azure IoT Central, the suggested next step is to learn about [device connectivity](overview-iot-central-developer.md) in Azure IoT Central.
iot-central Concepts Faq Scalability Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-faq-scalability-availability.md
+
+ Title: Azure IoT Central scalability and high availability | Microsoft Docs
+description: This article describes how IoT Central automatically scales to handle more devices, and its high availability and disaster recovery capabilities.
++ Last updated : 03/01/2022++++++
+# What does it mean for IoT Central to have high availability, disaster recovery (HADR), and elastic scale?
+
+Azure IoT Central is an application platform as a service (aPaaS) that manages scalability and HADR for you. An IoT Central application can scale to support millions of connected devices. For more information about device and message pricing, see [Azure IoT Central pricing](https://azure.microsoft.com/pricing/details/iot-central/). For more information about the service level agreement, see [SLA for Azure IoT Central](https://azure.microsoft.com/support/legal/sla/iot-central/v1_0/).
+
+This article provides background information about how IoT Central scales and delivers HADR. The article also includes guidance on how to take advantage of these capabilities.
+
+## Scalability
+
+IoT Central applications internally use multiple Azure services such as IoT Hub and the Device Provisioning Service (DPS). Many of these underlying services are multi-tenanted. However, to ensure the full isolation of customer data, IoT Central uses single-tenant IoT hubs.
+
+IoT Central automatically scales its IoT hubs based on the load profiles in your application. IoT Central can scale up individual IoT hubs and scale out the number of IoT hubs in an application. IoT Central also automatically scales other underlying services.
+
+## High availability and disaster recovery
+
+For highly available device connectivity, an IoT Central application always has at least two IoT hubs. For exceptions to this rule, see [Limitations](#limitations). The number of hubs can grow or shrink as IoT Central scales the application in response to changes in the load profile.
+
+IoT Central also uses [availability zones](../../availability-zones/az-overview.md#availability-zones) to make various services it uses highly available.
+
+An incident that requires disaster recovery could range from a subset of services becoming unavailable to a whole region becoming unavailable. IoT Central follows different recovery processes depending on the nature and scale of the incident. For example, if an entire Azure region becomes unavailable in the wake of a catastrophic failure, disaster recovery procedures fail over applications to another region in the same geography.
+
+## Work with multiple IoT hubs
+
+As a consequence of automatic scaling and HADR support, the IoT hub instances in your application can change. For example:
+
+- The number of hubs could increase or decrease as the application scales.
+- A hub could fail and become unavailable.
+- The disaster recovery procedures could add new hubs in a different region to replace the hubs in a failed region.
+
+Although IoT Central manages the IoT hubs in your application for you, a device must be able to re-establish a connection if the hub it connects to is unavailable.
+
+### Device provisioning
+
+As the number of IoT hubs in your application changes, a device might need to connect to a different hub.
+
+Before a device connects to IoT Central, it must be registered and provisioned in the underlying services. When you add a device to an IoT Central application, IoT Central adds an entry to a DPS enrollment group. Information from the enrollment group such as the ID scope, device ID, and keys is surfaced in the IoT Central UI.
+
+When a device first connects to your IoT Central application, DPS provisions the device in one of the enrollment group's linked IoT hubs. The device is then associated with that IoT hub. DPS uses an allocation policy to load balance the provisioning across the IoT hubs in the application. This process makes sure each IoT hub has a similar number of provisioned devices.
+
+To learn more about registration and provisioning in IoT Central, see [Get connected to Azure IoT Central](concepts-get-connected.md).
+
+### Device connections
+
+After DPS provisions a device to an IoT hub, the device always tries to connect to that hub. If a device can't reach the IoT hub it's provisioned to, it can't connect to your IoT Central application. To handle this scenario, your device firmware should include a retry strategy that reprovisions the device to another hub.
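+The reprovision-on-failure logic can be sketched as follows. This is a minimal illustration, not the device SDK's API: `connect` and `reprovision` are hypothetical stand-ins for the real SDK connect call and the DPS registration call.

```python
import time

def connect_with_reprovision(connect, reprovision, max_attempts=3, delay=0):
    """Connect to the assigned IoT hub; on failure, re-run DPS provisioning.

    `connect` and `reprovision` are placeholders for the real SDK calls.
    """
    hub = reprovision()                  # initial DPS registration returns an assigned hub
    for _ in range(max_attempts):
        try:
            return connect(hub)          # try the hub DPS assigned us
        except ConnectionError:
            time.sleep(delay)            # back off, then re-run DPS registration:
            hub = reprovision()          # the hub assignment may change
    raise ConnectionError("could not reach any assigned IoT hub")
```

The key design point is that the device never hard-codes a hub hostname; every retry goes back through DPS so a failed or removed hub is replaced transparently.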
+
+To learn more about how device firmware should handle connection errors and connect to a different hub, see [Best practices](overview-iot-central-developer.md#best-practices).
+
+To learn more about how to verify your device firmware can handle connection failures, see [Test failover capabilities](overview-iot-central-developer.md#test-failover-capabilities).
+
+## Data export
+
+IoT Central applications often use other, user-configured services. For example, you can configure your IoT Central application to continuously export data to services such as Azure Event Hubs and Azure Blob Storage.
+
+If a configured data export can't write to its destination, IoT Central tries to retransmit the data for up to 15 minutes, after which IoT Central marks the destination as failed. Failed destinations are periodically checked to verify if they're writable.
+
+You can force IoT Central to restart the failed exports by disabling and re-enabling the data export.
+
+Review the high availability and scalability best practices for the data export destination service you're using:
+
+- Azure Blob Storage: [Azure Storage redundancy](../../storage/common/storage-redundancy.md) and [Performance and scalability checklist for Blob storage](../../storage/blobs/storage-performance-checklist.md)
+- Azure Event Hubs: [Availability and consistency in Event Hubs](../../event-hubs/event-hubs-availability-and-consistency.md) and [Scaling with Event Hubs](../../event-hubs/event-hubs-scalability.md)
+- Azure Service Bus: [Best practices for insulating applications against Service Bus outages and disasters](../../service-bus-messaging/service-bus-outages-disasters.md) and [Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)
+
+## Limitations
+
+Currently, there are a few legacy IoT Central applications created before April 2021 that haven't yet migrated to the multiple IoT hub architecture. Use the `az iot central device manual-failover` command to check if your application still uses a single IoT hub.
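+For example, with the `azure-iot` CLI extension installed (the application ID and device ID below are placeholders):

```azurecli
az iot central device manual-failover \
  --app-id <iot-central-app-id> \
  --device-id <device-id>
```

If the application has only a single IoT hub, there's no second hub to fail over to, so the command can't complete a failover.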
+
+Currently, IoT Edge devices can't move between IoT hubs.
+
+## Next steps
+
+Now that you've learned about the scalability and high availability of Azure IoT Central, the suggested next step is to learn about [Quotas and limits](concepts-quotas-limits.md) in Azure IoT Central.
iot-central Concepts Get Connected https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-get-connected.md
All data exchanged between devices and your Azure IoT Central is encrypted. IoT
Some suggested next steps are to: -- Review [best practices](concepts-best-practices.md) for developing devices.
+- Review [best practices](overview-iot-central-developer.md#best-practices) for developing devices.
- Review some sample code that shows how to use SAS tokens in [Tutorial: Create and connect a client application to your Azure IoT Central application](tutorial-connect-device.md) - Learn how to [How to connect devices with X.509 certificates using Node.js device SDK for IoT Central Application](how-to-connect-devices-x509.md) - Learn how to [Monitor device connectivity using Azure CLI](./howto-monitor-devices-azure-cli.md)
iot-central Concepts Scalability Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/concepts-scalability-availability.md
- Title: Azure IoT Central scalability and high availability | Microsoft Docs
-description: This article describes how IoT Central automatically scales to handle more devices and its high availability.
-- Previously updated : 10/14/2021------
-# Scalability and high availability
-
-IoT Central applications internally use multiple Azure services such as IoT Hub and the Device Provisioning Service (DPS). Many of these underlying services are multi-tenanted. However, to ensure the full isolation of customer data, IoT Central uses single-tenant IoT hubs. IoT Central automatically manages multiple instances its underlying services to scale your IoT Central applications and make them highly available.
-
-IoT Central automatically scales its IoT hubs based on the load profiles in your application. IoT Central can scale up individual IoT hubs and scale out the number of IoT hubs. For highly available device connectivity, every IoT Central always has at least two IoT hubs. Although IoT Central manages its IoT hubs for you, having multiple IoT hubs impacts on the implementation of your device firmware.
-
-The IoT hubs in an IoT Central application are all located in the same Azure region. That's why the multiple IoT hub architecture provides highly available device connectivity if there's an isolated outage. If an entire Azure region becomes unavailable, disaster recovery procedures failover entire IoT Central applications to another region.
-
-## Device provisioning
-
-Before a device connects to IoT Central, it must be registered and provisioned in the underlying services. When you add a device to an IoT Central application, IoT Central adds an entry to a DPS enrollment group. Information from the enrollment group such as the ID scope, device ID, and keys is surfaced in the IoT Central UI.
-
-When a device first connects to your IoT Central application, DPS provisions the device in one of the enrollments group's linked IoT hubs. DPS uses an allocation policy to load balance the provisioning across the IoT hubs in the application. This process makes sure each IoT hub has a similar number of provisioned devices.
-
-To learn more about registration and provisioning in IoT Central, see [Get connected to Azure IoT Central](concepts-get-connected.md).
-
-## Device connections
-
-After DPS provisions a device to an IoT hub, the device always tries to connect to that hub. If a device can't reach the IoT hub it's provisioned to, it can't connect to your IoT Central application. To handle this scenario, your device firmware should include a retry strategy.
-
-To learn more about how device firmware should handle connection errors and connect to a different hub, see [Best practices for device development](concepts-best-practices.md).
-
-To learn more about how to verify your device firmware can handle connection failures, see [Test failover capabilities](concepts-best-practices.md#test-failover-capabilities).
-
-## Data export
-
-IoT Central applications often use other, user configured services. For example, you can configure your IoT Central application to continuously export data to services such as Azure Event Hubs and Azure Blob Storage.
-
-If a configured data export can't write to its destination, IoT Central tries to retransmit the data for up to 15 minutes, after which IoT Central marks the destination as failed. Failed destinations are periodically checked to verify if they are writable.
-
-You can force IoT Central to restart the failed exports by disabling and re-enabling the data export.
-
-Review the high availability and scalability best practices for the data export destination service you're using:
--- Azure Blob Storage: [Azure Storage redundancy](../../storage/common/storage-redundancy.md) and [Performance and scalability checklist for Blob storage](../../storage/blobs/storage-performance-checklist.md)-- Azure Event Hubs: [Availability and consistency in Event Hubs](../../event-hubs/event-hubs-availability-and-consistency.md) and [Scaling with Event Hubs](../../event-hubs/event-hubs-scalability.md)-- Azure Service Bus: [Best practices for insulating applications against Service Bus outages and disasters](../../service-bus-messaging/service-bus-outages-disasters.md) and [Automatically update messaging units of an Azure Service Bus namespace](../../service-bus-messaging/automate-update-messaging-units.md)-
-## Limitations
-
-Currently, there are a few legacy IoT Central applications that were created before April 2021 that haven't yet been migrated to the multiple IoT hub architecture. Use the `az iot central device manual-failover` command to check if your application still uses a single IoT hub.
-
-Currently, IoT Edge devices can't move between IoT hubs.
-
-## Next steps
-
-Now that you've learned about the scalability and high availability of Azure IoT Central, the suggested next step is to learn about [Quotas and limits](concepts-quotas-limits.md) in Azure IoT Central.
iot-central Overview Iot Central Developer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-developer.md
Using DPS enables:
- You to use your own device IDs to register devices in IoT Central. Using your own device IDs simplifies integration with existing back-office systems. - A single, consistent way to connect devices to IoT Central.
-To learn more, see [Get connected to Azure IoT Central](./concepts-get-connected.md) and [Best practices](concepts-best-practices.md).
+To learn more, see [Get connected to Azure IoT Central](./concepts-get-connected.md) and [best practices](#best-practices).
### Security
iot-central Overview Iot Central Operator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/overview-iot-central-operator.md
IoT Central lets you complete device management tasks such as:
- Troubleshoot and remediate issues with devices. - Provision new devices.
+## Search your devices
+
+IoT Central lets you search devices by device name, ID, property value, or cloud property value.
++ ## Monitor and manage devices :::image type="content" source="media/overview-iot-central-operator/simulated-telemetry.png" alt-text="Screenshot that shows a device view":::
iot-hub Iot Hub Migrate To Diagnostics Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-migrate-to-diagnostics-settings.md
Customers using [operations monitoring](iot-hub-operations-monitoring.md) to track the status of operations in IoT Hub can migrate that workflow to [Azure Monitor resource logs](../azure-monitor/essentials/platform-logs-overview.md), a feature of Azure Monitor. Resource logs supply resource-level diagnostic information for many Azure services.
-**The operations monitoring functionality of IoT Hub is deprecated**, and has been removed from the portal. This article provides steps to move your workloads from operations monitoring to Azure Monitor resource logs. For more information about the deprecation timeline, see [Monitor your Azure IoT solutions with Azure Monitor and Azure Resource Health](https://azure.microsoft.com/blog/monitor-your-azure-iot-solutions-with-azure-monitor-and-azure-resource-health/).
+>[!IMPORTANT]
+>**IoT Hub operations monitoring is retired and was removed from IoT Hub on March 10, 2019.** Accordingly, this article is no longer being updated. IoT Hub operations monitoring was replaced by Azure Monitor. To learn about monitoring the operations and health of IoT Hub with Azure Monitor, see [Monitor IoT Hub](monitor-iot-hub.md).
+
+This article provides steps to move your workloads from operations monitoring to Azure Monitor resource logs.
## Update IoT Hub
iot-hub Iot Hub Operations Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-operations-monitoring.md
-# IoT Hub operations monitoring (deprecated)
+# IoT Hub operations monitoring (retired)
IoT Hub operations monitoring enables you to monitor the status of operations on your IoT hub in real time. IoT Hub tracks events across several categories of operations. You can opt into sending events from one or more categories to an endpoint of your IoT hub for processing. You can monitor the data for errors or set up more complex processing based on data patterns.
->[!NOTE]
->IoT Hub **operations monitoring is deprecated and has been removed from IoT Hub on March 10, 2019**. For monitoring the operations and health of IoT Hub, see [Monitor IoT Hub](monitor-iot-hub.md). For more information about the deprecation timeline, see [Monitor your Azure IoT solutions with Azure Monitor and Azure Resource Health](https://azure.microsoft.com/blog/monitor-your-azure-iot-solutions-with-azure-monitor-and-azure-resource-health).
+>[!IMPORTANT]
+>**IoT Hub operations monitoring is retired and was removed from IoT Hub on March 10, 2019.** Accordingly, this article is no longer being updated. IoT Hub operations monitoring was replaced by Azure Monitor. To learn about monitoring the operations and health of IoT Hub with Azure Monitor, see [Monitor IoT Hub](monitor-iot-hub.md).
IoT Hub monitors six categories of events:
load-balancer Load Balancer Troubleshoot Backend Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-troubleshoot-backend-traffic.md
na Previously updated : 11/24/2021 Last updated : 03/02/2022
This page provides troubleshooting information for Azure Load Balancer questions.
-## VMs behind Load Balancer are receving uneven distribution of traffic
+## Virtual machines behind a load balancer are receiving uneven distribution of traffic
+ If you suspect backend pool members are receiving an uneven distribution of traffic, it could be due to the following causes. Azure Load Balancer distributes traffic based on connections. Be sure to check traffic distribution per connection and not per packet. Verify using the **Flow Distribution** tab in your pre-configured [Load Balancer Insights dashboard](load-balancer-insights.md#flow-distribution).
-Note that Azure Load Balancer doesn't support true round robin load balancing but supports a hash based [distribution mode](distribution-mode-concepts.md).
+Azure Load Balancer doesn't support true round-robin load balancing but supports a hash-based [distribution mode](distribution-mode-concepts.md).
## Cause 1: You have session persistence configured
-Using source persistence distribution mode can cause an uneven distribution of traffic.
-If this is not desired, update session persistence to be **None** so traffic is distributed across all healthy instances in the backend pool. Learn more about [distribution modes for Azure Load Balancer](distribution-mode-concepts.md).
+Using source persistence distribution mode can cause an uneven distribution of traffic. If this distribution isn't desired, update session persistence to be **None** so traffic is distributed across all healthy instances in the backend pool. Learn more about [distribution modes for Azure Load Balancer](distribution-mode-concepts.md).
## Cause 2: You have a proxy configured
-Clients that run behind proxies might be seen as one unique client application from the Load Balancer's point of view.
+Clients that run behind proxies might be seen as one unique client application from the load balancer's point of view.
+
+## VMs behind a load balancer aren't responding to traffic on the configured data port
+
+If a backend pool VM is listed as healthy and responds to the health probes, but still isn't participating in load balancing or responding to the data traffic, it may be due to any of the following reasons:
+
+* A load balancer backend pool VM isn't listening on the data port
-## VMs behind Load Balancer are not responding to traffic on the configured data port
+* Network security group is blocking the port on the load balancer backend pool VM 
-If a backend pool VM is listed as healthy and responds to the health probes, but is still not participating in the Load Balancing, or is not responding to the data traffic, it may be due to any of the following reasons:
-* Load Balancer Backend pool VM is not listening on the data port
-* Network security group is blocking the port on the Load Balancer backend pool VM 
-* Accessing the Load Balancer from the same VM and NIC
-* Accessing the Internet Load Balancer frontend from the participating Load Balancer backend pool VM
+* Accessing the load balancer from the same VM and NIC
-## Cause 1: Load Balancer backend pool VM is not listening on the data port
+* Accessing the Internet load balancer frontend from the participating load balancer backend pool VM
-If a VM does not respond to the data traffic, it may be because either the target port is not open on the participating VM, or, the VM is not listening on that port.
+## Cause 1: A load balancer backend pool VM isn't listening on the data port
+
+If a VM doesn't respond to the data traffic, it may be because either the target port isn't open on the participating VM, or, the VM isn't listening on that port.
**Validation and resolution**
-1. Log in to the backend VM.
-2. Open a command prompt and run the following command to validate there is an application listening on the data port: 
- netstat -an
-3. If the port is not listed with State "LISTENING", configure the proper listener port
-4. If the port is marked as Listening, then check the target application on that port for any possible issues.
+1. Sign in to the backend VM.
+
+2. Open a command prompt and run the following command to validate there's an application listening on the data port: 
+
+ ```cmd
+ netstat -an
+ ```
+
+3. If the port isn't listed with state **LISTENING**, configure the proper listener port.
+
+4. If the port is marked as **LISTENING**, then check the target application on that port for any possible issues.
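To narrow the output to a single port rather than scanning the whole list, you can filter it. For example, for data port 80 (substitute your own data port):

```cmd
netstat -an | findstr :80
```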
-## Cause 2: Network security group is blocking the port on the Load Balancer backend pool VM 
+## Cause 2: A network security group is blocking the port on the load balancer backend pool VM 
If one or more network security groups configured on the subnet or on the VM are blocking the source IP or port, the VM is unable to respond. For the public load balancer, the IP addresses of the internet clients are used for communication between the clients and the load balancer backend VMs. Make sure the IP addresses of the clients are allowed in the backend VM's network security group. 1. List the network security groups configured on the backend VM. For more information, see [Manage network security groups](../virtual-network/manage-network-security-group.md)
-1. From the list of network security groups, check if:
- - the incoming or outgoing traffic on the data port has interference.
- - a **Deny All** network security group rule on the NIC of the VM or the subnet that has a higher priority that the default rule that allows Load Balancer probes and traffic (network security groups must allow Load Balancer IP of 168.63.129.16, that is probe port)
-1. If any of the rules are blocking the traffic, remove and reconfigure those rules to allow the data traffic. 
-1. Test if the VM has now started to respond to the health probes.
-## Cause 3: Accessing the Load Balancer from the same VM and Network interface
+2. From the list of network security groups, check if:
+
+ - Any rule interferes with the incoming or outgoing traffic on the data port.
+
+ - A **Deny All** network security group rule on the NIC of the VM or the subnet has a higher priority than the default rule that allows the load balancer probes and traffic (network security groups must allow the load balancer IP address 168.63.129.16, which is the source of the health probes)
-If your application hosted in the backend VM of a Load Balancer is trying to access another application hosted in the same backend VM over the same Network Interface, it is an unsupported scenario and will fail.
+3. If any of the rules are blocking the traffic, remove and reconfigure those rules to allow the data traffic. 
+
+4. Test if the VM has now started to respond to the health probes.
+
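You can also review the configured rules from the command line with the Azure CLI; the resource group and network security group names below are placeholders:

```azurecli
az network nsg rule list \
  --resource-group <resource-group> \
  --nsg-name <nsg-name> \
  --output table
```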
+## Cause 3: Access of the load balancer from the same VM and network interface
+
+If an application hosted in a backend VM of a load balancer tries to access another application hosted in the same backend VM over the same network interface, the scenario is unsupported and will fail.
**Resolution**

You can resolve this issue via one of the following methods:

* Configure separate backend pool VMs per application.
-* Configure the application in dual NIC VMs so each application was using its own Network interface and IP address.
-## Cause 4: Accessing the internal Load Balancer frontend from the participating Load Balancer backend pool VM
+* Configure the application in dual-NIC VMs so each application uses its own network interface and IP address.
+
+## Cause 4: Access of the internal load balancer frontend from the participating load balancer backend pool VM
-If an internal Load Balancer is configured inside a VNet, and one of the participant backend VMs is trying to access the internal Load Balancer frontend, failures can occur when the flow is mapped to the originating VM. This scenario is not supported.
+If an internal load balancer is configured inside a virtual network, and one of the participant backend VMs is trying to access the internal load balancer frontend, failures can occur when the flow is mapped to the originating VM. This scenario isn't supported.
**Resolution**
-There are several ways to unblock this scenario, including using a proxy. Evaluate Application Gateway or other 3rd party proxies (for example, nginx or haproxy). For more information about Application Gateway, see [Overview of Application Gateway](../application-gateway/overview.md)
+
+There are several ways to unblock this scenario, including using a proxy. Evaluate Application Gateway or other third-party proxies (for example, nginx or haproxy). For more information about Application Gateway, see [Overview of Application Gateway](../application-gateway/overview.md).
**Details**
-Internal Load Balancers don't translate outbound originated connections to the front end of an internal Load Balancer because both are in private IP address space. Public Load Balancers provide [outbound connections](load-balancer-outbound-connections.md) from private IP addresses inside the virtual network to public IP addresses. For internal Load Balancers, this approach avoids potential SNAT port exhaustion inside a unique internal IP address space, where translation isn't required.
-A side effect is that if an outbound flow from a VM in the back-end pool attempts a flow to front end of the internal Load Balancer in its pool _and_ is mapped back to itself, the two legs of the flow don't match. Because they don't match, the flow fails. The flow succeeds if the flow didn't map back to the same VM in the back-end pool that created the flow to the front end.
+Internal load balancers don't translate outbound originated connections to the front end of an internal load balancer because both are in private IP address space. Public load balancers provide [outbound connections](load-balancer-outbound-connections.md) from private IP addresses inside the virtual network to public IP addresses. For internal load balancers, this approach avoids potential SNAT port exhaustion inside a unique internal IP address space, where translation isn't required.
+
+A side effect is that if an outbound flow from a VM in the back-end pool attempts a flow to front end of the internal load balancer in its pool _and_ is mapped back to itself, the two legs of the flow don't match. Because they don't match, the flow fails. The flow succeeds if the flow didn't map back to the same VM in the back-end pool that created the flow to the front end.
When the flow maps back to itself, the outbound flow appears to originate from the VM to the front end and the corresponding inbound flow appears to originate from the VM to itself. From the guest operating system's point of view, the inbound and outbound parts of the same flow don't match inside the virtual machine. The TCP stack won't recognize these halves of the same flow as being part of the same flow. The source and destination don't match. When the flow maps to any other VM in the back-end pool, the halves of the flow do match and the VM can respond to the flow.
-The symptom for this scenario is intermittent connection timeouts when the flow returns to the same backend that originated the flow. Common workarounds include insertion of a proxy layer behind the internal Load Balancer and using Direct Server Return (DSR) style rules. For more information, see [Multiple Frontends for Azure Load Balancer](load-balancer-multivip-overview.md).
+The symptom for this scenario is intermittent connection timeouts when the flow returns to the same backend that originated the flow. Common workarounds include insertion of a proxy layer behind the internal load balancer and using Direct Server Return (DSR) style rules. For more information, see [Multiple frontends for Azure Load Balancer](load-balancer-multivip-overview.md).
-You can combine an internal Load Balancer with any third-party proxy or use internal [Application Gateway](../application-gateway/overview.md) for proxy scenarios with HTTP/HTTPS. While you could use a public Load Balancer to mitigate this issue, the resulting scenario is prone to [SNAT exhaustion](load-balancer-outbound-connections.md). Avoid this second approach unless carefully managed.
+You can combine an internal load balancer with any third-party proxy or use internal [Application Gateway](../application-gateway/overview.md) for proxy scenarios with HTTP/HTTPS. While you could use a public load balancer to mitigate this issue, the resulting scenario is prone to [SNAT exhaustion](load-balancer-outbound-connections.md). Avoid this second approach unless carefully managed.
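The mismatch described above can be sketched as a toy model (illustrative names and addresses only; this isn't real Azure code — an internal load balancer rewrites only the destination, so when the flow lands back on the originating VM, the reply no longer appears to come from the front-end IP the guest connected to):

```python
VM_IP = "10.0.0.4"          # back-end pool VM that opens the flow
FRONTEND_IP = "10.0.0.100"  # internal LB front end (both in private space)

def lb_dnat(packet: dict) -> dict:
    """The LB translates the destination to a back-end IP; the source is untouched."""
    if packet["dst"] == FRONTEND_IP:
        return {**packet, "dst": VM_IP}   # mapped back to the same VM
    return packet

outbound = {"src": VM_IP, "dst": FRONTEND_IP}   # leg 1: VM -> front end
delivered = lb_dnat(outbound)                   # leg 2: what the VM receives

# The guest's socket expects replies from FRONTEND_IP, but the reply to the
# delivered packet would come from the VM itself -- the two legs don't match.
reply_source = delivered["dst"]                 # the peer answering is the VM itself
print(reply_source == FRONTEND_IP)              # False: the flow fails
```

A proxy layer behind the load balancer avoids this because the proxy, not the originating VM, terminates the hairpinned connection.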
## Next steps
-If the preceding steps do not resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).
+If the preceding steps don't resolve the issue, open a [support ticket](https://azure.microsoft.com/support/options/).
load-balancer Manage Probes How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-probes-how-to.md
+
+ Title: Manage health probes for Azure Load Balancer - Azure portal
+description: In this article, learn how to manage health probes for Azure Load Balancer using the Azure portal
+Last updated: 03/02/2022
+# Manage health probes for Azure Load Balancer using the Azure portal
+
+Azure Load Balancer supports health probes to monitor the health of backend instances. In this article, you'll learn how to manage health probes for Azure Load Balancer.
+
+There are three types of health probes:
+
+| | Standard SKU | Basic SKU |
+| --- | --- | --- |
+| **Probe types** | TCP, HTTP, HTTPS | TCP, HTTP |
+| **Probe down behavior** | All probes down, all TCP flows continue. | All probes down, all TCP flows expire. |
+
+>[!IMPORTANT]
+>Load Balancer health probes originate from the IP address 168.63.129.16. Don't block traffic from this address; otherwise, probes can't mark your instance as up. To see this probe traffic within your backend instance, review [the Azure Load Balancer FAQ](./load-balancer-faqs.yml).
+>
+>
+>Regardless of configured time-out threshold, HTTP(S) load balancer health probes will automatically mark the instance as down if the server returns any status code that isn't HTTP 200 OK or if the connection is terminated via TCP reset.
+
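As a sketch of the 200 OK requirement above, here's a minimal health endpoint using only the Python standard library (the path and port are illustrative assumptions, not Azure requirements):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            self.send_response(200)     # only a 200 OK keeps the instance marked up
            self.end_headers()
            self.wfile.write(b"OK")
        else:
            self.send_response(503)     # any non-200 status marks the instance down
            self.end_headers()

    def log_message(self, *args):       # keep request logging quiet for this demo
        pass

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0 = auto-assign
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

status = urllib.request.urlopen(f"http://127.0.0.1:{port}/").status
server.shutdown()
print(status)  # 200
```

A real probe target would return 200 only after the application's own readiness checks pass, so an unhealthy instance is taken out of rotation.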
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- A standard public load balancer in your subscription. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md). The load balancer name for the examples in this article is **myLoadBalancer**.
+
+## TCP health probe
+
+In this section, you'll learn how to add and remove a TCP health probe. A public load balancer is used in the examples.
+
+### Add a TCP health probe
+
+In this example, you'll create a TCP health probe to monitor port 80.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Health probes** in **Settings**.
+
+5. Select **+ Add** in **Health probes** to add a probe.
+
+ :::image type="content" source="./media/manage-probes-how-to/add-probe.png" alt-text="Screenshot of the health probes page for Azure Load Balancer":::
+
+6. Enter or select the following information in **Add health probe**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHealthProbe**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter the **TCP** port you wish to monitor. For this example, it's **port 80**. |
+ | Interval | Enter an interval between probe checks. For this example, it's the default of **5**. |
+ | Unhealthy threshold | Enter the threshold number for consecutive failures. For this example, it's the default of **2**. |
+
+7. Select **Add**.
+
+ :::image type="content" source="./media/manage-probes-how-to/add-tcp-probe.png" alt-text="Screenshot of TCP probe addition.":::
+
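As a rough rule of thumb — an approximation, not a documented guarantee — the time to mark an instance down is about the probe interval multiplied by the unhealthy threshold:

```python
# Back-of-the-envelope estimate: an instance is marked down after roughly
# `unhealthy_threshold` consecutive failed probes, one every `interval_seconds`.
def approx_detection_seconds(interval_seconds: int, unhealthy_threshold: int) -> int:
    return interval_seconds * unhealthy_threshold

# Defaults used in this article's examples: interval 5, threshold 2.
print(approx_detection_seconds(5, 2))   # 10 seconds
```

Lowering the interval detects failures faster but probes the backend more often; tune both values against how quickly your service can legitimately go in and out of health.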
+### Remove a TCP health probe
+
+In this example, you'll remove a TCP health probe.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Health probes** in **Settings**.
+
+5. Select the three dots next to the probe you want to remove.
+
+6. Select **Delete**.
+
+ :::image type="content" source="./media/manage-probes-how-to/remove-tcp-probe.png" alt-text="Screenshot of TCP probe removal.":::
+
+## HTTP health probe
+
+In this section, you'll learn how to add and remove an HTTP health probe. A public load balancer is used in the examples.
+
+### Add an HTTP health probe
+
+In this example, you'll create an HTTP health probe.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Health probes** in **Settings**.
+
+5. Select **+ Add** in **Health probes** to add a probe.
+
+ :::image type="content" source="./media/manage-probes-how-to/add-probe.png" alt-text="Screenshot of the health probes page for Azure Load Balancer":::
+
+6. Enter or select the following information in **Add health probe**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHealthProbe**. |
+ | Protocol | Select **HTTP**. |
+ | Port | Enter the **TCP** port you wish to monitor. For this example, it's **port 80**. |
+ | Path | Enter a URI used for requesting health status. For this example, it's **/**. |
+ | Interval | Enter an interval between probe checks. For this example, it's the default of **5**. |
+ | Unhealthy threshold | Enter the threshold number for consecutive failures. For this example, it's the default of **2**. |
+
+7. Select **Add**.
+
+ :::image type="content" source="./media/manage-probes-how-to/add-http-probe.png" alt-text="Screenshot of HTTP probe addition.":::
+
+### Remove an HTTP health probe
+
+In this example, you'll remove an HTTP health probe.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Health probes** in **Settings**.
+
+5. Select the three dots next to the probe you want to remove.
+
+6. Select **Delete**.
+
+ :::image type="content" source="./media/manage-probes-how-to/remove-http-probe.png" alt-text="Screenshot of HTTP probe removal.":::
+
+## HTTPS health probe
+
+In this section, you'll learn how to add and remove an HTTPS health probe. A public load balancer is used in the examples.
+
+### Add an HTTPS health probe
+
+In this example, you'll create an HTTPS health probe.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Health probes** in **Settings**.
+
+5. Select **+ Add** in **Health probes** to add a probe.
+
+ :::image type="content" source="./media/manage-probes-how-to/add-probe.png" alt-text="Screenshot of the health probes page for Azure Load Balancer":::
+
+6. Enter or select the following information in **Add health probe**.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHealthProbe**. |
+ | Protocol | Select **HTTPS**. |
+ | Port | Enter the **TCP** port you wish to monitor. For this example, it's **port 443**. |
+ | Path | Enter a URI used for requesting health status. For this example, it's **/**. |
+ | Interval | Enter an interval between probe checks. For this example, it's the default of **5**. |
+ | Unhealthy threshold | Enter the threshold number for consecutive failures. For this example, it's the default of **2**. |
+
+7. Select **Add**.
+
+ :::image type="content" source="./media/manage-probes-how-to/add-https-probe.png" alt-text="Screenshot of HTTPS probe addition.":::
+
+### Remove an HTTPS health probe
+
+In this example, you'll remove an HTTPS health probe.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+3. Select **myLoadBalancer** or your load balancer.
+
+4. In the load balancer page, select **Health probes** in **Settings**.
+
+5. Select the three dots next to the probe you want to remove.
+
+6. Select **Delete**.
+
+ :::image type="content" source="./media/manage-probes-how-to/remove-https-probe.png" alt-text="Screenshot of HTTPS probe removal.":::
+
+## Next steps
+
+In this article, you learned how to manage health probes for Azure Load Balancer.
+
+For more information about Azure Load Balancer, see:
+- [What is Azure Load Balancer?](load-balancer-overview.md)
+- [Frequently asked questions - Azure Load Balancer](load-balancer-faqs.yml)
+- [Azure Load Balancer health probes](load-balancer-custom-probe-overview.md)
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
ms.suite: integration Previously updated : 01/28/2022 Last updated : 03/02/2022 +
+#Customer intent: As a developer, I want to create an automated integration workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
# Create an integration workflow with single-tenant Azure Logic Apps (Standard) in the Azure portal
As you progress, you'll complete these high-level tasks:
<a name="create-logic-app-resource"></a>
-## Create the logic app resource
+## Create a Standard logic app resource
1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
As you progress, you'll complete these high-level tasks:
1. On the **Logic apps** page, select **Add**.
-1. On the **Create Logic App** page, on the **Basics** tab, provide the following information about your logic app resource:
+1. On the **Create Logic App** page, on the **Basics** tab, provide the following basic information about your logic app:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. |
+ | **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **Fabrikam-Workflows-RG**. |
+ | **Logic App name** | Yes | <*logic-app-name*> | Your logic app name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Standard)** resource is powered by the single-tenant Azure Logic Apps runtime, which uses the Azure Functions extensibility model and is hosted as an extension on the Azure Functions runtime. Azure Functions uses the same app naming convention. <br><br>This example creates a logic app named **Fabrikam-Workflows**. |
+ |||||
+
+1. Before you continue making selections, under **Plan type**, select **Standard** so that you view only the settings that apply to the Standard plan-based logic app type. The **Plan type** property specifies the logic app type and billing model to use.
+
+ | Plan type | Description |
+ |--|-|
+ | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). |
+| **Standard** | This logic app type is the default selection. It runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). |
+ |||
+
+1. Now continue making the following selections:
| Property | Required | Value | Description | |-|-|-|-|
- | **Subscription** | Yes | <*Azure-subscription-name*> | The Azure subscription to use for your logic app. |
- | **Resource Group** | Yes | <*Azure-resource-group-name*> | The Azure resource group where you create your logic app and related resources. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a resource group named `Fabrikam-Workflows-RG`. |
- | **Type** | Yes | **Standard** | This logic app resource type runs in the single-tenant Azure Logic Apps environment and uses the [Standard usage, billing, and pricing model](logic-apps-pricing.md#standard-pricing). |
- | **Logic App name** | Yes | <*logic-app-name*> | The name to use for your logic app. This resource name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <p><p>This example creates a logic app named `Fabrikam-Workflows`. <p><p>**Note**: Your logic app's name automatically gets the suffix, `.azurewebsites.net`, because the **Logic App (Standard)** resource is powered by the single-tenant Azure Logic Apps runtime, which uses the Azure Functions extensibility model and is hosted as an extension on the Azure Functions runtime. Azure Functions uses the same app naming convention. |
- | **Publish** | Yes | <*deployment-environment*> | The deployment destination for your logic app. By default, **Workflow** is selected for deployment to single-tenant Azure Logic Apps. Azure creates an empty logic app resource where you have to add your first workflow. <p><p>**Note**: Currently, the **Docker Container** option requires a [*custom location*](../azure-arc/kubernetes/conceptual-custom-locations.md) on an Azure Arc enabled Kubernetes cluster, which you can use with [Azure Arc enabled Logic Apps (Preview)](azure-arc-enabled-logic-apps-overview.md). The resource locations for your logic app, custom location, and cluster must all be the same. |
- | **Region** | Yes | <*Azure-region*> | The location to use for creating your resource group and resources. This example deploys the sample logic app to Azure and uses **West US**. <p>- If you selected **Docker Container**, select your custom location. <p>- To deploy to an [ASEv3](../app-service/environment/overview.md) resource, which must first exist, select that environment resource from the **Region** list. |
+ | **Publish** | Yes | **Workflow** | This option appears and applies only when **Plan type** is set to the **Standard** logic app type. By default, this option is set to **Workflow** and creates an empty logic app resource where you add your first workflow. <p><p>**Note**: Currently, the **Docker Container** option requires a [*custom location*](../azure-arc/kubernetes/conceptual-custom-locations.md) on an Azure Arc enabled Kubernetes cluster, which you can use with [Azure Arc enabled Logic Apps (Standard)](azure-arc-enabled-logic-apps-overview.md). The resource locations for your logic app, custom location, and cluster must all be the same. |
+ | **Region** | Yes | <*Azure-region*> | The Azure datacenter region to use for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. <br><br>- If you previously chose **Docker Container**, select your custom location from the **Region** list. <br><br>- If you want to deploy your app to an existing [App Service Environment v3 resource](../app-service/environment/overview.md), you can select that environment from the **Region** list. |
|||||
- The following example shows the **Create Logic App (Standard)** page:
+ When you're done, your settings look similar to the following example:
![Screenshot that shows the Azure portal and "Create Logic App" page.](./media/create-single-tenant-workflows-azure-portal/create-logic-app-resource-portal.png)
logic-apps Monitor Logic Apps Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-logic-apps-log-analytics.md
ms.suite: integration Previously updated : 09/24/2020 Last updated : 03/03/2022 # Set up Azure Monitor logs and collect diagnostics data for Azure Logic Apps
This article shows how to enable Log Analytics on new logic apps and existing lo
## Prerequisites
-Before you start, you need a [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace). If you don't have a workspace, learn [how to create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
+* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+* A [Log Analytics workspace](../azure-monitor/essentials/resource-logs.md#send-to-log-analytics-workspace). If you don't have a workspace, learn [how to create a Log Analytics workspace](../azure-monitor/logs/quick-create-workspace.md).
<a name="logging-for-new-logic-apps"></a>
Before you start, you need a [Log Analytics workspace](../azure-monitor/essentia
You can turn on Log Analytics when you create your logic app.
-1. In the [Azure portal](https://portal.azure.com), on the pane where you provide the information to create your logic app, follow these steps:
+1. In the [Azure portal](https://portal.azure.com), on the **Create Logic App** pane where you provide the information to create your Consumption plan-based logic app, follow these steps:
- 1. Under **Log Analytics**, select **On**.
+ 1. Under **Enable log analytics**, select **Yes**.
1. From the **Log Analytics workspace** list, select the workspace where you want to send the data from your logic app runs. ![Provide logic app information](./media/monitor-logic-apps-log-analytics/create-logic-app-details.png)
- After you finish this step, Azure creates your logic app, which is now associated with your Log Analytics workspace. Also, this step automatically installs the Logic Apps Management solution in your workspace.
-
-1. When you're done, select **Create**.
+1. Finish creating your logic app. When you're done, your logic app is associated with your Log Analytics workspace. This step also automatically installs the Logic Apps Management solution in your workspace.
1. After you run your logic app, to view your logic app runs, [continue with these steps](#view-logic-app-runs).
You can turn on Log Analytics when you create your logic app.
If you turned on Log Analytics when you created your logic app, skip this step. You already have the Logic Apps Management solution installed in your Log Analytics workspace.
-1. In the [Azure portal](https://portal.azure.com)'s search box, enter `log analytics workspaces`, and then select **Log Analytics workspaces**.
+1. In the [Azure portal](https://portal.azure.com)'s search box, enter **log analytics workspaces**. Select **Log Analytics workspaces**.
![Select "Log Analytics workspaces"](./media/monitor-logic-apps-log-analytics/find-select-log-analytics-workspaces.png)
If you turned on Log Analytics when you created your logic app, skip this step.
![On overview pane, add new solution](./media/monitor-logic-apps-log-analytics/add-logic-apps-management-solution.png)
-1. After the **Marketplace** opens, in the search box, enter `logic apps management`, and select **Logic Apps Management**.
+1. After the **Marketplace** opens, in the search box, enter **logic apps management**. Select **Logic Apps Management**.
![From Marketplace, select "Logic Apps Management"](./media/monitor-logic-apps-log-analytics/select-logic-apps-management.png)
-1. On the solution description pane, select **Create**.
+1. On the **Logic Apps Management** tile, from the **Create** list, select **Logic Apps Management**.
![Select "Create" to add "Logic Apps Management" solution](./media/monitor-logic-apps-log-analytics/create-logic-apps-management-solution.png)
-1. Review and confirm the Log Analytics workspace where you want to install the solution, and select **Create** again.
+1. On the **Create Logic Apps Management (Preview) Solution** pane, select the Log Analytics workspace where you want to install the solution. Select **Review + create**, review your information, and select **Create**.
![Select "Create" for "Logic Apps Management"](./media/monitor-logic-apps-log-analytics/confirm-log-analytics-workspace.png)
- After Azure deploys the solution to the Azure resource group that contains your Log Analytics workspace, the solution appears on your workspace's summary pane.
+ After Azure deploys the solution to the Azure resource group that contains your Log Analytics workspace, the solution appears on your workspace summary pane under **Overview**.
- ![Workspace summary pane](./media/monitor-logic-apps-log-analytics/workspace-summary-pane-logic-apps-management.png)
+ ![Screenshot showing workspace summary pane with Logic Apps Management solution.](./media/monitor-logic-apps-log-analytics/workspace-summary-pane-logic-apps-management.png)
<a name="set-up-resource-logs"></a>
When you store information about runtime events and data in [Azure Monitor logs]
1. To create the setting, follow these steps:
- 1. Provide a name for the setting.
+ 1. For **Diagnostic setting name**, provide a name for the setting.
- 1. Select **Send to Log Analytics**.
+ 1. Under **Destination details**, select **Send to Log Analytics workspace**.
1. For **Subscription**, select the Azure subscription that's associated with your Log Analytics workspace.
- 1. For **Log Analytics Workspace**, select the workspace that you want to use.
+ 1. For **Log Analytics workspace**, select your workspace.
- 1. Under **log**, select the **WorkflowRuntime** category, which specifies the event category that you want to record.
+ 1. Under **Logs** > **Categories**, select **WorkflowRuntime**, which specifies the event category that you want to record.
- 1. To select all metrics, under **metric**, select **AllMetrics**.
+ 1. Under **Metrics**, select **AllMetrics**.
1. When you're done, select **Save**.
- For example:
+ When you're done, your settings look similar to the following example:
![Select Log Analytics workspace and data for logging](./media/monitor-logic-apps-log-analytics/send-diagnostics-data-log-analytics-workspace.png)
After your logic app runs, you can view the data about those runs in your Log An
1. In the [Azure portal](https://portal.azure.com), find and open your Log Analytics workspace.
-1. On your workspace's menu, select **Workspace summary** > **Logic Apps Management**.
-
- ![Logic app run status and count](./media/monitor-logic-apps-log-analytics/logic-app-runs-summary.png)
+1. On your workspace menu, under **General**, select **Workspace summary** > **Logic Apps Management**.
> [!NOTE] > If the Logic Apps Management tile doesn't immediately show results after a run, > try selecting **Refresh** or wait for a short time before trying again.
+ ![Logic app run status and count](./media/monitor-logic-apps-log-analytics/logic-app-runs-summary.png)
+ Here, your logic app runs are grouped by name or by execution status. This page also shows details about failures in actions or triggers for the logic app runs.
+
+ ![Status summary for your logic app runs](./media/monitor-logic-apps-log-analytics/logic-app-runs-summary-details.png)
logic-apps Quickstart Create First Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-first-logic-app-workflow.md
Last updated 03/02/2022
-#Customer intent: As a developer, I want to create my first automated integration workflow using Azure Logic Apps in the Azure portal.
+#Customer intent: As a developer, I want to create my first automated integration workflow that runs in Azure Logic Apps using the Azure portal.
# Quickstart: Create an integration workflow with multi-tenant Azure Logic Apps and the Azure portal
-This quickstart shows how to create an example automated workflow that integrates two services, an RSS feed for a website and an email account, and that runs in [*multi-tenant* Azure Logic Apps](logic-apps-overview.md). While this example is cloud-based, Azure Logic Apps supports workflows that connect apps, data, services, and systems across cloud, on premises, and hybrid environments. For more information about multi-tenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+This quickstart shows how to create an example automated workflow that integrates two services, an RSS feed for a website and an email account. More specifically, you create a [Consumption plan-based](logic-apps-pricing.md#consumption-pricing) logic app resource and workflow that uses the RSS connector and the Office 365 Outlook connector. This resource runs in [*multi-tenant* Azure Logic Apps](logic-apps-overview.md).
-In this example, you create a logic app resource and workflow that uses the RSS connector and the Office 365 Outlook connector. The resource runs in multi-tenant Azure Logic Apps and is based on the [Consumption pricing model](logic-apps-pricing.md#consumption-pricing). The RSS connector has a trigger that checks an RSS feed, based on a schedule. The Office 365 Outlook connector has an action that sends an email for each new item. The connectors in this example are only two among the [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) that you can use in a workflow.
+> [!NOTE]
+> To create a workflow in a Standard logic app resource that runs in *single-tenant* Azure Logic Apps, review
+> [Create an integration workflow with single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md).
+> For more information about multi-tenant versus single-tenant Azure Logic Apps, review
+> [Single-tenant versus multi-tenant and integration service environment](single-tenant-overview-compare.md).
+
+The RSS connector has a trigger that checks an RSS feed, based on a schedule. The Office 365 Outlook connector has an action that sends an email for each new item. The connectors in this example are only two among the [hundreds of connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) that you can use in a workflow. While this example is cloud-based, Azure Logic Apps supports workflows that connect apps, data, services, and systems across cloud, on premises, and hybrid environments.
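The trigger-and-action pattern above can be sketched conceptually (the inlined feed XML and the "send email" print are stand-ins, not the Azure Logic Apps runtime; a real workflow fetches the feed on a schedule and uses the Office 365 Outlook action):

```python
import xml.etree.ElementTree as ET

sample_feed = """<rss><channel>
  <item><title>First post</title><pubDate>Tue, 01 Mar 2022 10:00:00 GMT</pubDate></item>
  <item><title>Second post</title><pubDate>Wed, 02 Mar 2022 10:00:00 GMT</pubDate></item>
</channel></rss>"""

def new_item_titles(feed_xml: str, seen_titles: set) -> list:
    """Return unseen item titles -- the 'when a feed item is published' trigger."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title")
            for item in root.iter("item")
            if item.findtext("title") not in seen_titles]

seen = {"First post"}                   # items already processed on a prior check
for title in new_item_titles(sample_feed, seen):
    print(f"send email: {title}")       # stands in for the Office 365 Outlook action
```

In the managed service, the connector tracks which items it has already seen between scheduled checks, so each new item fires the action exactly once.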
The following screenshot shows the high-level example workflow:
The following screenshot shows the high-level example workflow:
As you progress through this quickstart, you'll learn these basic steps:
-* Create a Consumption logic app resource that runs in the multi-tenant Azure Logic Apps environment.
+* Create a Consumption logic app resource that runs in multi-tenant Azure Logic Apps.
* Select the blank logic app template. * Add a trigger that specifies when to run the workflow. * Add an action that performs a task after the trigger fires.
To create and manage a logic app resource using other tools, review these other
<a name="create-logic-app-resource"></a>
-## Create a logic app resource
+## Create a Consumption logic app resource
1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account.
To create and manage a logic app resource using other tools, review these other
![Screenshot showing the Azure portal and Logic Apps service page and "Add" option selected.](./media/quickstart-create-first-logic-app-workflow/add-new-logic-app.png)
-1. On the **Create Logic App** pane, select the Azure subscription to use, create a new [resource group](../azure-resource-manager/management/overview.md#terminology) for your logic app resource, and provide basic details about your logic app resource.
-
- | Property | Value | Description |
- |-|-|-|
- | **Subscription** | <*Azure-subscription-name*> | The name of your Azure subscription. |
- | **Resource Group** | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) name, which must be unique across regions. This example uses "My-First-LA-RG". |
- | **Type** | **Consumption** | The logic app resource type and billing model to use for your resource: <p><p>- **Consumption**: This logic app resource type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). <p>- **Standard**: This logic app resource type runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). <br><br>To continue following this quickstart, make sure that you select the **Consumption** option. |
- | **Logic App name** | <*logic-app-name*> | Your logic app resource name, which must be unique across regions. This example uses **My-First-Logic-App**. <p><p>**Important**: This name can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
- | **Publish** | **Workflow** | Available only when you select the [**Standard** logic app resource type](create-single-tenant-workflows-azure-portal.md). By default, **Workflow** is selected for deployment to [single-tenant Azure Logic Apps](single-tenant-overview-compare.md) and creates an empty logic app resource where you add your first workflow. <p><p>**Note**: Currently, the **Docker Container** option requires a [*custom location*](../azure-arc/kubernetes/conceptual-custom-locations.md) on an Azure Arc enabled Kubernetes cluster, which you can use with [Azure Arc enabled Logic Apps (Standard)](azure-arc-enabled-logic-apps-overview.md). The resource locations for your logic app, custom location, and cluster must all be the same. |
- | **Region** | <*Azure-region*> | The Azure datacenter region where to store your app's information. This example selects the **West US** region. <p>**Note**: If your subscription is associated with an [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md), this list includes those environments. |
- | **Enable log analytics** | **No** | Available only when you select the **Consumption** logic app resource type. <p><p>Change this option only when you want to enable diagnostic logging. For this example, leave this option unselected. |
+1. On the **Create Logic App** pane, on the **Basics** tab, provide the following basic information about your logic app:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. |
+ | **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **My-First-LA-RG**. |
+ | **Logic App name** | Yes | <*logic-app-name*> | Your logic app name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). <br><br>This example creates a logic app named **My-First-Logic-App**. |
+ |||||
+
+1. Before you continue making selections, under **Plan type**, select **Consumption** so that you see only the settings that apply to Consumption logic apps. The **Plan type** property specifies the logic app type and billing model to use.
+
+ | Plan type | Description |
+ |--|-|
+ | **Consumption** | This logic app type runs in global, multi-tenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). |
+ | **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard billing model](logic-apps-pricing.md#standard-pricing). |
+ |||
+
+1. Now continue making the following selections:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Region** | Yes | <*Azure-region*> | The Azure datacenter region for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. <p>**Note**: If your subscription is associated with an [integration service environment](connect-virtual-network-vnet-isolated-environment-overview.md), this list includes those environments. |
+ | **Enable log analytics** | Yes | **No** | This option appears and applies only when you select the **Consumption** logic app type. <p><p>Change this option only when you want to enable diagnostic logging. For this quickstart, keep the default selection. |
||||
+ When you're done, your settings look similar to the following example:
+ ![Screenshot showing the Azure portal and logic app resource creation page with details for new logic app.](./media/quickstart-create-first-logic-app-workflow/create-logic-app-settings.png) 1. When you're ready, select **Review + Create**.
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
The hosts in this section are used to install R packages, and are required durin
| - | - | | **cloud.r-project.org** | Used when installing CRAN packages. |
-### Azure Kubernetes Services
-
-When using Azure Kubernetes Service with Azure Machine Learning, the following traffic must be allowed:
-
-* General inbound/outbound requirements for AKS as described in the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) article.
-* __Outbound__ to mcr.microsoft.com.
-* When deploying a model to an AKS cluster, use the guidance in the [Deploy ML models to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md#connectivity) article.
- ### Azure Arc enabled Kubernetes <a id="arc-kubernetes"></a>
-Azure Arc enabled Kubernetes clusters depend on Azure Arc connections. Make sure to meet [Azure Arc network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli#meet-network-requirements).
-
-The hosts in this section are used to deploy the Azure Machine Learning extension to Kubernetes clusters and submit training and inferencing workloads to the clusters.
+Clusters running behind an outbound proxy server or firewall need additional network configurations. Fulfill the [Azure Arc network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli#meet-network-requirements) needed by the Azure Arc agents. In addition, the following outbound URLs are required for Azure Machine Learning:
-**Azure Machine Learning extension deployment**
+| Outbound Endpoint| Port | Description|Training |Inference |
+|--|--|--|--|--|
+| *.kusto.windows.net,<br> *.table.core.windows.net, <br>*.queue.core.windows.net | https:443 | Required to upload system logs to Kusto. |**&check;**|**&check;**|
+| *.azurecr.io | https:443 | Azure container registry, required to pull docker images used for machine learning workloads.|**&check;**|**&check;**|
+| *.blob.core.windows.net | https:443 | Azure blob storage, required to fetch machine learning project scripts, data, or models, and to upload job logs/outputs.|**&check;**|**&check;**|
+| *.workspace.\<region\>.api.azureml.ms,<br> \<region\>.experiments.azureml.net, <br> \<region\>.api.azureml.ms | https:443 | Azure Machine Learning service API.|**&check;**|**&check;**|
+| pypi.org | https:443 | Python package index, to install pip packages used for training job environment initialization.|**&check;**|N/A|
+| archive.ubuntu.com, <br> security.ubuntu.com,<br> ppa.launchpad.net | http:80 | Required to download the necessary security patches. |**&check;**|N/A|
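As a quick sanity check from inside the cluster network, a script like the following can probe a couple of the non-wildcard endpoints above. This is a hedged sketch, not official guidance: wildcard hosts such as `*.azurecr.io` and `*.blob.core.windows.net` must be replaced with your actual resource hostnames before probing them.

```shell
# check_egress.sh - minimal sketch for verifying outbound connectivity to a
# couple of the endpoints listed above. Wildcard entries must be replaced
# with your real hostnames before probing.
check() {
  if curl --silent --head --max-time 10 --output /dev/null "$1"; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}
check https://pypi.org          # pip packages for training environments
check http://archive.ubuntu.com # Ubuntu security patches
```

A `FAIL` line indicates that the corresponding endpoint still needs to be allowed through the proxy or firewall.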
-Enable outbound access to the following endpoints when deploying the Azure Machine Learning extension to the cluster.
-
-| Destination Endpoint| Port | Use |
-|--|--|--|
-| *.data.mcr.microsoft.com| https:443 | Required for MCR storage backed by the Azure content delivery network (CDN). |
-| quay.io, *.quay.io | https:443 | Quay.io registry, required to pull container images for AML extension components |
-| gcr.io| https:443 | Google cloud repository, required to pull container images for AML extension components |
-| storage.googleapis.com | https:443 | Google cloud storage, gcr images are hosted on |
-| registry-1.docker.io, production.cloudflare.docker.com | https:443 | Docker hub registry, required to pull container images for AML extension components |
-| auth.docker.io| https:443 | Docker repository authentication, required to access docker hub registry |
-| *.kusto.windows.net, *.table.core.windows.net, *.queue.core.windows.net | https:443 | Required to upload and analyze system logs in Kusto |
-
-**Training workloads only**
-
-Enable outbound access to the following endpoints to submit training workloads to the cluster.
-
-| Destination Endpoint| Port | Use |
-|--|--|--|
-| pypi.org | https:443 | Python package index, to install pip packages used to initialize the job environment |
-| archive.ubuntu.com, security.ubuntu.com, ppa.launchpad.net | http:80 | This address lets the init container download the required security patches and updates |
-
-**Training and inferencing workloads**
-
-In addition to the endpoints for training workloads, enable outbound access for the following endpoints to submit training and inferencing workloads.
-
-| Destination Endpoint| Port | Use |
-|--|--|--|
-| *.azurecr.io | https:443 | Azure container registry, required to pull container images to host training or inference jobs|
-| *.blob.core.windows.net | https:443 | Azure blob storage, required to fetch machine learning project scripts, container images and job logs/metrics |
-| *.workspace.\<region\>.api.azureml.ms , \<region\>.experiments.azureml.net, \<region\>.api.azureml.ms | https:443 | Azure machine learning service api, required to communicate with AML |
+> [!NOTE]
+> `<region>` is the lowercase full name of the Azure region, for example, eastus, southeastasia.
### Visual Studio Code hosts
machine-learning How To Attach Arc Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-arc-kubernetes.md
To use Azure Kubernetes Service clusters for Azure Machine Learning training and
Before deploying the Azure Machine Learning extension on Azure Kubernetes Service clusters, you have to: - Register the feature in your AKS cluster. For more information, see [Azure Kubernetes Service prerequisites](#aks-prerequisites).-- Configure inbound and outbound network traffic. For more information, see [Configure inbound and outbound network traffic (AKS)](how-to-access-azureml-behind-firewall.md#azure-kubernetes-services-1). To deploy the Azure Machine Learning extension on AKS clusters, see the [Deploy Azure Machine Learning extension](#deploy-azure-machine-learning-extension) section.
To deploy the Azure Machine Learning extension on AKS clusters, see the [Deploy
> [!NOTE] > For AKS clusters, connecting them to Azure Arc is **optional**.
-* Fulfill [Azure Arc network requirements](../azure-arc/kubernetes/quickstart-connect-cluster.md?tabs=azure-cli#meet-network-requirements)
-
- > [!IMPORTANT]
- > Clusters running behind an outbound proxy server or firewall need additional network configurations.
- >
- > For more information, see [Configure inbound and outbound network traffic (Azure Arc-enabled Kubernetes)](how-to-access-azureml-behind-firewall.md#arc-kubernetes).
-
+* Clusters running behind an outbound proxy server or firewall need additional network configurations. See [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md#azure-arc-enabled-kubernetes-).
* Fulfill [Azure Arc-enabled Kubernetes cluster extensions prerequisites](../azure-arc/kubernetes/extensions.md#prerequisites). * Azure CLI version >= 2.24.0 * Azure CLI k8s-extension extension version >= 1.0.0
To deploy the Azure Machine Learning extension on AKS clusters, see the [Deploy
az login az account set --subscription <your-subscription-id> ``` - ### Azure Kubernetes Service (AKS) <a id="aks-prerequisites"></a> For AKS clusters, connecting them to Azure Arc is **optional**.
However, you have to register the feature in your cluster. Use the following com
```azurecli az feature register --namespace Microsoft.ContainerService -n AKS-ExtensionManager ```
+> [!NOTE]
+> For more information, see [Deploy and manage cluster extensions for Azure Kubernetes Service (AKS)](../aks/cluster-extensions.md)
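Before installing the extension, you can check whether the feature registration has completed. The following sketch assumes you're signed in with `az login`; registration can take several minutes to reach the `Registered` state.

```shell
# Show the registration state of the AKS-ExtensionManager preview feature.
az feature show --namespace Microsoft.ContainerService \
  --name AKS-ExtensionManager \
  --query properties.state --output tsv
```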
### Azure RedHat OpenShift Service (ARO) and OpenShift Container Platform (OCP) only
az feature register --namespace Microsoft.ContainerService -n AKS-ExtensionManag
Azure Arc-enabled Kubernetes has a cluster extension functionality that enables you to install various agents including Azure Policy definitions, monitoring, machine learning, and many others. Azure Machine Learning requires the use of the *Microsoft.AzureML.Kubernetes* cluster extension to deploy the Azure Machine Learning agent on the Kubernetes cluster. Once the Azure Machine Learning extension is installed, you can attach the cluster to an Azure Machine Learning workspace and use it for the following scenarios:
-* [Training](#training)
+* [Training only](#training)
* [Real-time inferencing only](#inferencing) * [Training and inferencing](#training-inferencing)
You can use ```--config``` or ```--config-protected``` to specify list of key-va
| Configuration Setting Key Name | Description | Training | Inference | Training and Inference | |--|--|--|--|--| |```enableTraining``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning model training support. | **&check;**| N/A | **&check;** |
- |```logAnalyticsWS``` |```True``` or ```False```, default ```False```. AzureML extension integrates with Azure LogAnalytics Workspace to provide log viewing and analysis capability through LogAnalytics Workspace. This setting must be explicitly set to ```True``` if customer wants to use this capability. LogAnalytics Workspace cost may apply. |Optional |Optional |Optional |
- |```installNvidiaDevicePlugin``` | ```True``` or ```False```, default ```True```. Nvidia Device Plugin is required for ML workloads on Nvidia GPU hardware. By default, AzureML extension deployment will install Nvidia Device Plugin regardless Kubernetes cluster has GPU hardware or not. User can specify this configuration setting to False if Nvidia Device Plugin installation is not required (either it is installed already or there is no plan to use GPU for workload). | Optional |Optional |Optional |
| ```enableInference``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning inference support. |N/A| **&check;** | **&check;** | | ```allowInsecureConnections``` |```True``` or ```False```, default False. This **must** be set to ```True``` for AzureML extension deployment with HTTP endpoints support for inference, when ```sslCertPemFile``` and ```sslKeyPemFile``` are not provided. |N/A| Optional | Optional | | ```privateEndpointNodeport``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML deployment with Machine Learning inference private endpoints support using serviceType nodePort. | N/A| Optional | Optional | | ```privateEndpointILB``` |```True``` or ```False```, default ```False```. **Must** be set to ```True``` for AzureML extension deployment with Machine Learning inference private endpoints support using serviceType internal load balancer | N/A| Optional | Optional |
+ |```sslSecret```| The Kubernetes secret under the azureml namespace that stores `cert.pem` (PEM-encoded SSL cert) and `key.pem` (PEM-encoded SSL key), required for AzureML extension deployment with HTTPS endpoint support for inference, when ``allowInsecureConnections`` is set to ```False```. Use this config, or provide static cert and key file paths in the configuration protected settings.|N/A| Optional | Optional |
+ |```sslCname``` |An SSL CNAME to use if SSL validation is enabled on the cluster. | N/A | Optional | Optional |
| ```inferenceLoadBalancerHA``` |```True``` or ```False```, default ```True```. By default, AzureML extension will deploy three ingress controller replicas for high availability, which requires at least three workers in a cluster. Set this config to ```False``` if you have fewer than three workers and want to deploy AzureML extension for development and testing only, in this case it will deploy one ingress controller replica only. | N/A| Optional | Optional | |```openshift``` | ```True``` or ```False```, default ```False```. Set to ```True``` if you deploy AzureML extension on ARO or OCP cluster. The deployment process will automatically compile a policy package and load policy package on each node so AzureML services operation can function properly. | Optional| Optional | Optional | |```nodeSelector``` | Set the node selector so the extension components and the training/inference workloads will only be deployed to the nodes with all specified selectors. Usage: `nodeSelector.key=value`, support multiple selectors. Example: `nodeSelector.node-purpose=worker nodeSelector.node-region=eastus`| Optional| Optional | Optional |
- |```sslCname``` |The cname for if SSL is enabled. | N/A | Optional | Optional |
+ |```installNvidiaDevicePlugin``` | ```True``` or ```False```, default ```True```. The Nvidia Device Plugin is required for ML workloads on Nvidia GPU hardware. By default, the AzureML extension deployment installs the Nvidia Device Plugin regardless of whether the Kubernetes cluster has GPU hardware. Set this configuration setting to ```False``` if the Nvidia Device Plugin installation isn't required (either it's already installed or there's no plan to use GPUs for workloads). | Optional |Optional |Optional |
+ |```reuseExistingPromOp```|```True``` or ```False```, default ```False```. The AzureML extension needs the Prometheus operator to manage Prometheus. Set to ```True``` to reuse an existing Prometheus operator. | Optional| Optional | Optional |
+ |```logAnalyticsWS``` |```True``` or ```False```, default ```False```. AzureML extension integrates with Azure LogAnalytics Workspace to provide log viewing and analysis capability through LogAnalytics Workspace. This setting must be explicitly set to ```True``` if customer wants to use this capability. LogAnalytics Workspace cost may apply. |Optional |Optional |Optional |
|Configuration Protected Setting Key Name |Description |Training |Inference |Training and Inference |--|--|--|--|--|
- | ```sslCertPemFile```, ```sslKeyPemFile``` |Path to SSL certificate and key file (PEM-encoded), required for AzureML extension deployment with HTTPS endpoint support for inference, when ``allowInsecureConnections`` is set to False. | N/A| Optional | Optional |
+ | ```sslCertPemFile```, ```sslKeyPemFile``` |Path to SSL certificate and key file (PEM-encoded), required for AzureML extension deployment with HTTPS endpoint support for inference, when ``allowInsecureConnections`` is set to ```False```. | N/A| Optional | Optional |
> [!WARNING] > If Nvidia Device Plugin, is already installed in your cluster, reinstalling them may result in an extension installation error. Set `installNvidiaDevicePlugin` to `False` to prevent deployment errors.
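As an illustration of how these settings combine, a deployment enabling both training and inference might look like the following sketch. The extension name `amlarc-compute`, cluster name `my-connected-cluster`, and resource group `my-rg` are placeholders, and `allowInsecureConnections=True` is suitable for dev/test only.

```shell
# Deploy the AzureML extension with training and inference enabled on an
# Azure Arc-enabled (connected) cluster. All resource names are placeholders.
az k8s-extension create \
  --name amlarc-compute \
  --extension-type Microsoft.AzureML.Kubernetes \
  --cluster-type connectedClusters \
  --cluster-name my-connected-cluster \
  --resource-group my-rg \
  --scope cluster \
  --config enableTraining=True enableInference=True \
           allowInsecureConnections=True inferenceLoadBalancerHA=False
```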
kubectl get pods -n azureml
``` ## Update Azure Machine Learning extension
-Use ```k8s-extension update``` CLI command to update the mutable properties of Azure Machine Learning extension. For more information, see the [`k8s-extension update` CLI command documentation](/cli/azure/k8s-extension#az_k8s_extension_update).
+Use ```k8s-extension update``` CLI command to update the mutable properties of Azure Machine Learning extension. For more information, see the [`k8s-extension update` CLI command documentation](/cli/azure/k8s-extension?view=azure-cli-latest#az_k8s_extension_update&preserve-view=true).
1. Azure Arc supports update of ``--auto-upgrade-minor-version``, ``--version``, ``--configuration-settings``, ``--configuration-protected-settings``. 2. For configurationSettings, only the settings that require update need to be provided. If the user provides all settings, they would be merged/overwritten with the provided values.
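For example, turning on LogAnalytics integration after the initial deployment might look like the following sketch; only the changed setting needs to be supplied, and the resource names are placeholders.

```shell
# Update a single mutable setting on an existing AzureML extension.
# All resource names are placeholders.
az k8s-extension update \
  --name amlarc-compute \
  --cluster-type connectedClusters \
  --cluster-name my-connected-cluster \
  --resource-group my-rg \
  --configuration-settings logAnalyticsWS=True
```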
Use ```k8s-extension update``` CLI command to update the mutable properties of
## Delete Azure Machine Learning extension
-Use [`k8s-extension delete`](/cli/azure/k8s-extension#az_k8s_extension_delete) CLI command to delete the Azure Machine Learning extension.
+Use [`k8s-extension delete`](/cli/azure/k8s-extension?view=azure-cli-latest#az_k8s_extension_delete&preserve-view=true) CLI command to delete the Azure Machine Learning extension.
It takes around 10 minutes to delete all components deployed to the Kubernetes cluster. Run `kubectl get pods -n azureml` to check if all components were deleted.
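A deletion followed by the suggested verification might look like the following sketch. The resource names are placeholders, and `--yes` skips the confirmation prompt.

```shell
# Delete the AzureML extension, then confirm no pods remain in the azureml
# namespace. All resource names are placeholders.
az k8s-extension delete \
  --name amlarc-compute \
  --cluster-type connectedClusters \
  --cluster-name my-connected-cluster \
  --resource-group my-rg \
  --yes
kubectl get pods -n azureml
```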
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples
* [rolling-origin cross validation](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb) * [configurable lags](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb) * [rolling window aggregate features](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb)
-* [DNN](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-beer-remote/auto-ml-forecasting-beer-remote.ipynb)
+ ## Next steps
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md
Once you have exported your labeled data to an Azure Machine Learning dataset, y
## Next steps
-* [Tutorial: Create your first image classification labeling project](tutorial-labeling.md).
+<!-- * [Tutorial: Create your first image classification labeling project](tutorial-labeling.md). -->
* [How to tag images](how-to-label-data.md)
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Azure Container Registry can be configured to use a private endpoint. Use the fo
[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
- If you've [installed the Machine Learning extension v2 for Azure CLI](how-to-configure-cli.md), you can use the `az ml workspace show` command to show the workspace information.
+ If you've [installed the Machine Learning extension v2 for Azure CLI](how-to-configure-cli.md), you can use the `az ml workspace show` command to show the workspace information. The v1 extension does not return this information.
```azurecli-interactive az ml workspace show -w yourworkspacename -g resourcegroupname --query 'container_registry'
Azure Container Registry can be configured to use a private endpoint. Use the fo
# [Azure CLI](#tab/cli)
- [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-
- If you've [installed the Machine Learning extension v2 for Azure CLI](how-to-configure-cli.md), you can use the `az ml workspace update` command to set a build compute. In the following command, replace `myworkspace` with your workspace name, `myresourcegroup` with the resource group that contains the workspace, and `mycomputecluster` with the compute cluster name:
+ You can use the `az ml workspace update` command to set a build compute. The command is the same for both the v1 and v2 Azure CLI extensions for machine learning. In the following command, replace `myworkspace` with your workspace name, `myresourcegroup` with the resource group that contains the workspace, and `mycomputecluster` with the compute cluster name:
```azurecli
- az ml workspace update \
- -n myworkspace \
- -g myresourcegroup \
- -i mycomputecluster
+ az ml workspace update --name myworkspace --resource-group myresourcegroup --image-build-compute mycomputecluster
``` # [Python SDK](#tab/python)
machine-learning Tutorial Labeling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-labeling.md
- Title: "Tutorial: Create a labeling project for image classification"-
-description: Learn how to manage the process of labeling images so they can be used in multi-class image classification models.
------- Previously updated : 10/21/2021-
-# Customer intent: As a project administrator, I want to manage the process of labeling images so they can be used in machine learning models.
-# THIS ARTICLE SHOWS A SAS TOKEN THAT EXPIRES IN 2025
--
-# Tutorial: Create a labeling project for multi-class image classification
--
-This tutorial shows you how to manage the process of labeling (also referred to as tagging) images to be used as data for building machine learning models. Data labeling in Azure Machine Learning is in public preview.
-
-If you want to train a machine learning model to classify images, you need hundreds or even thousands of images that are correctly labeled. Azure Machine Learning helps you manage the progress of your private team of domain experts as they label your data.
-
-In this tutorial, you'll use images of cats and dogs. Since each image is either a cat or a dog, this is a *multi-class* labeling project. You'll learn how to:
-
-> [!div class="checklist"]
->
-> * Create an Azure storage account and upload images to the account.
-> * Create an Azure Machine Learning workspace.
-> * Create a multi-class image labeling project.
-> * Label your data. Either you or your labelers can perform this task.
-> * Complete the project by reviewing and exporting the data.
-
-## Prerequisites
-
-* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-
-## Create a workspace
-
-An Azure Machine Learning workspace is a foundational resource in the cloud that you use to experiment, train, and deploy machine learning models. It ties your Azure subscription and resource group to an easily consumed object in the service.
-
-There are many [ways to create a workspace](how-to-manage-workspace.md). In this tutorial, you create a workspace via the Azure portal, a web-based console for managing your Azure resources.
--
-## Start a labeling project
-
-Next you will manage the data labeling project in Azure Machine Learning studio, a consolidated interface that includes machine learning tools to perform data science scenarios for data science practitioners of all skill levels. The studio is not supported on Internet Explorer browsers.
-
-1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
-
-1. Select your subscription and the workspace you created.
-
-### <a name="create-datastore"></a>Create a datastore
-
-Azure Machine Learning datastores are used to store connection information, like your subscription ID and token authorization. Here you use a datastore to connect to the storage account that contains the images for this tutorial.
-
-1. On the left side of your workspace, select **Datastores**.
-
-1. Select **+ New datastore**.
-
-1. Fill out the form with these settings:
-
- Field|Description
- |
- Datastore name | Give the datastore a name. Here we use **labeling_tutorial**.
- Datastore type | Select the type of storage. Here we use **Azure Blob Storage**, the preferred storage for images.
- Account selection method | Select **Enter manually**.
- URL | `https://azureopendatastorage.blob.core.windows.net/openimagescontainer`
- Authentication type | Select **SAS token**.
- Account key | `ZPlDx0bFHFEqwoy8/B/ZZg1YKi/+cIiPamOPUrRptWbvkO6d84n4loitnSMorv/AxrvE0s86cUr6rULWaSGA2A==`
-
-1. Select **Create** to create the datastore.
-
-### Create a labeling project
-
-Now that you have access to the data you want to have labeled, create your labeling project.
-
-1. At the top of the page, select **Projects**.
-
-1. Select **+ Add project**.
-
- :::image type="content" source="media/tutorial-labeling/create-project.png" alt-text="Create a project":::
-
-### Project details
-
-1. Use the following input for the **Project details** form:
-
- Field|Description
- |
- Project name | Give your project a name. Here we'll use **tutorial-cats-n-dogs**.
- Labeling task type | Select **Image Classification Multi-class**.
-
- Select **Next** to continue creating the project.
-
-### Add workforce (optional)
-
-Select **Next** to continue. You won't be using an external workforce for this tutorial.
-
-### Select or create a dataset
-
-1. On the **Select or create a dataset** form, select the second choice, **Create a dataset**, then select the link **From datastore**.
-
-1. Use the following input for the **Create dataset from datastore** form:
-
- 1. On the **Basic info** form, add a name, here we'll use **images-for-tutorial**. Add a description if you wish. Then select **Next**.
- 1. On the **Datastore selection** form, select **Previously created datastore**, then click on the datastore name and select **Select datastore**.
- 1. On the next page, verify that the currently selected datastore is correct. If not, select **Previously created datastore** and repeat the prior step.
- 1. Next, still on the **Datastore selection** form, select **Browse** and then select **MultiClass - DogsCats**. Select **Save** to use **/MultiClass - DogsCats** as the path.
- 1. Select **Next** to confirm details and then **Create** to create the dataset.
- 1. Select the circle next to the dataset name in the list, for example **images-for-tutorial**.
-
-1. Select **Next** to continue creating the project.
-
-### Incremental refresh
-
-If you plan to add new images to your dataset, incremental refresh will find these new images and add them to your project. When you enable this feature, the project will periodically check for new images. You won't be adding new images to the datastore for this tutorial, so leave this feature unchecked.
-
-Select **Next** to continue.
-
-### Label classes
-
-1. On the **Label classes** form, type a label name, then select **+Add label** to type the next label. For this project, the labels are **Cat**, **Dog**, and **Uncertain**.
-
-1. Select **Next** when you have added all the labels.
-
-### Labeling instructions
-
-1. On the **Labeling instructions** form, you can provide a link to a website that provides detailed instructions for your labelers. We'll leave it blank for this tutorial.
-
-1. You can also add a short description of the task directly on the form. Type **Labeling tutorial - Cats & Dogs.**
-
-1. Select **Next**.
-
-1. In the **ML assisted labeling** section, leave the checkbox unchecked. ML assisted labeling requires more data than you'll be using in this tutorial.
-
-1. Select **Create project**.
-
-This page doesn't automatically refresh. After a pause, manually refresh the page until the project's status changes to **Created**.
-
-## Start labeling
-
-You have now set up your Azure resources, and configured a data labeling project. It's time to add labels to your data.
-
-### Tag the images
-
-In this part of the tutorial, you'll switch roles from the *project administrator* to that of a *labeler*. Anyone who has contributor access to your workspace can become a labeler.
-
-1. In [Machine Learning studio](https://ml.azure.com), select **Data labeling** on the left-hand side to find your project.
-
-1. Select **Label link** for the project.
-
-1. Read the instructions, then select **Tasks**.
-
-1. Select a thumbnail image on the right to display the number of images you wish to label in one go. You must label all these images before you can move on. Only switch layouts when you have a fresh page of unlabeled data. Switching layouts clears the page's in-progress tagging work.
-
-1. Select one or more images, then select a tag to apply to the selection. The tag appears below the image. Continue to select and tag all images on the page. To select all the displayed images simultaneously, select **Select all**. Select at least one image to apply a tag.
--
- > [!TIP]
- > You can select the first nine tags by using the number keys on your keyboard.
-
-1. Once all the images on the page are tagged, select **Submit** to submit these labels.
-
- ![Tagging images](media/tutorial-labeling/catsndogs.gif)
-
-1. After you submit tags for the data at hand, Azure refreshes the page with a new set of images from the work queue.
-
-## Complete the project
-
-Now you'll switch roles back to the *project administrator* for the labeling project.
-
-As a manager, you may want to review the work of your labeler.
-
-### Review labeled data
-
-1. In [Machine Learning studio](https://ml.azure.com), select **Data labeling** on the left-hand side to find your project.
-
-1. Select the project name link.
-
-1. The Dashboard shows you the progress of your project.
-
-1. At the top of the page, select **Data**.
-
-1. On the left side, select **Labeled data** to see your tagged images.
-
-1. When you disagree with a label, select the image and then select **Reject** at the bottom of the page. The tags will be removed and the image is put back in the queue of unlabeled images.
-
-### Export labeled data
-
-You can export the label data for Machine Learning experimentation at any time. Users often export multiple times and train different models, rather than wait for all the images to be labeled.
-
-Image labels can be exported in [COCO format](http://cocodataset.org/#format-data) or as an Azure Machine Learning dataset. The dataset format makes it easy to use for training in Azure Machine Learning.
-
-1. In [Machine Learning studio](https://ml.azure.com), select **Data labeling** on the left-hand side to find your project.
-
-1. Select the project name link.
-
-1. Select **Export** and choose **Export as Azure ML Dataset**.
-
- The status of the export appears just below the **Export** button.
-
-1. Once the labels are successfully exported, select **Datasets** on the left side to view the results.
-
-## Clean up resources
---
-## Next steps
-
-> [!div class="nextstepaction"]
-> [Train a machine learning image recognition model](how-to-use-labeled-dataset.md).
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
In this tutorial, you learn the following tasks:
- Install [Visual Studio Code](https://code.visualstudio.com/docs/setup/setup-overview), a lightweight, cross-platform code editor.
- Azure Machine Learning Studio Visual Studio Code extension. For install instructions, see the [Setup Azure Machine Learning Visual Studio Code extension guide](./how-to-setup-vs-code.md).
- CLI (v2) (preview). For installation instructions, see [Install, set up, and use the CLI (v2) (preview)](how-to-configure-cli.md).
+- Clone the community-driven repository:
+ ```bash
+ git clone https://github.com/Azure/azureml-examples.git
+ ```
## Understand the code
The code for this tutorial uses TensorFlow to train an image classification mach
![MNIST Digits](./media/tutorial-train-deploy-image-classification-model-vscode/digits.png)
-Get the code for this tutorial by downloading and unzipping the [Azure ML Examples repository](https://github.com/Azure/azureml-examples/archive/refs/heads/main.zip) anywhere on your computer.
-
## Create a workspace

The first thing you have to do to build an application in Azure Machine Learning is to create a workspace. A workspace contains the resources to train models as well as the trained models themselves. For more information, see [what is a workspace](./concept-workspace.md).
-1. Open the *azureml-examples-main/cli/jobs/train/tensorflow/mnist* directory in Visual Studio Code.
+1. Open the *azureml-examples/cli/jobs/single-step/tensorflow/mnist* directory from the community-driven repository in Visual Studio Code.
1. On the Visual Studio Code activity bar, select the **Azure** icon to open the Azure Machine Learning view.
1. In the Azure Machine Learning view, right-click your subscription node and select **Create Workspace**.

    > [!div class="mx-imgBorder"]
    > ![Create workspace](./media/tutorial-train-deploy-image-classification-model-vscode/create-workspace.png)
-1. A specification file appears. Configure the specification file with the following options.
+1. A specification file appears. Configure the specification file with the following options.
```yml $schema: https://azuremlschemas.azureedge.net/latest/workspace.schema.json
mariadb Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/concepts-compatibility.md
Azure Database for MariaDB uses the community edition of MariaDB server. Therefo
The goal is to support the three most recent versions of MariaDB drivers, and efforts continue with authors from the open-source community to improve the functionality and usability of MariaDB drivers. A list of drivers that have been tested and found to be compatible with Azure Database for MariaDB 10.2 is provided in the following table:
-> [!WARNING]
-> The MySQL 8.0.27 client is incompatible with Azure Database for MariaDB - Single Server. All connections from the MySQL 8.0.27 client created either via mysql.exe or workbench will fail. As a workaround, consider using an earlier version of the client (prior to MySQL 8.0.27).
-- **Driver** | **Links** | **Compatible Versions** | **Incompatible Versions** | **Notes** |||| PHP | https://secure.php.net/downloads.php | 5.5, 5.6, 7.x | 5.3 | For PHP 7.0 connection with SSL MySQLi, add MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT in the connection string. <br> ```mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT);```<br> PDO set: ```PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT``` option to false.
mariadb Howto Troubleshoot Common Connection Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-troubleshoot-common-connection-issues.md
Generally, connection issues to Azure Database for MariaDB can be classified as
* Transient errors (short-lived or intermittent) * Persistent or non-transient errors (errors that regularly recur)
-> [!WARNING]
-> The MySQL 8.0.27 client is incompatible with Azure Database for MariaDB - Single Server. All connections from the MySQL 8.0.27 client created either via mysql.exe or workbench will fail. As a workaround, consider using an earlier version of the client (prior to MySQL 8.0.27).
- ## Troubleshoot transient errors Transient errors occur when maintenance is performed, the system encounters an error with the hardware or software, or you change the vCores or service tier of your server. The Azure Database for MariaDB service has built-in high availability and is designed to mitigate these types of problems automatically. However, your application loses its connection to the server for a short period of time of typically less than 60 seconds at most. Some events can occasionally take longer to mitigate, such as when a large transaction causes a long-running recovery.
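Because most transient errors clear within about 60 seconds, applications typically handle them with retry logic rather than failing immediately. A minimal backoff sketch (the `connect` callable is a stand-in for your MariaDB driver's connect call; all names here are illustrative):

```python
import time

def with_retries(connect, attempts=5, base_delay=1.0):
    """Retry a connection attempt with exponential backoff.

    `connect` is any callable that raises on a transient failure;
    substitute your MariaDB driver's connect call here.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except Exception:
            if attempt == attempts - 1:
                raise  # persistent error: give up after the last attempt
            # Back off 1s, 2s, 4s, ...; transient errors usually clear quickly.
            time.sleep(base_delay * (2 ** attempt))

# Example with a fake connection that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "connected"

print(with_retries(flaky, base_delay=0.01))  # connected
```

In production, catch only the driver's transient error types rather than every `Exception`.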
marketplace Marketplace Apis Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-apis-guide.md
The activities below are not sequential. The activity you use is dependent on yo
| <center>Activity | ISV sales activities | Corresponding Marketplace API | Corresponding Marketplace UI |
| --- | --- | --- | --- |
-| <center>**1. Product Marketing**<br><img src="medi)</ul> | Create product messaging, positioning, promotion, pricing<br>Partner Center (PC) → Offer Creation |
+| <center>**1. Product Marketing**<br><img src="medi)</ul> | Create product messaging, positioning, promotion, pricing<br>Partner Center (PC) → Offer Creation |
| <center>**2. Demand Generation**<br><img src="medi)<br>[Co-Sell Connector for SalesForce CRM](/partner-center/connector-salesforce)<br>[Co-Sell Connector for Dynamics 365 CRM](/partner-center/connector-dynamics) | Product Promotion<br>Lead nurturing<br>Eval, trial & PoC<br>Azure Marketplace and AppSource<br>PC Marketplace Insights<br>PC Co-Sell Opportunities |
-| <center>**3. Negotiation and Quote Creation**<br><img src="medi)<br>[Partner Center '7' API Family](https://apidocs.microsoft.com/services/partnercenter) | T&Cs<br>Pricing<br>Discount approvals<br>Final quote<br>PC → Plans (public or private) |
+| <center>**3. Negotiation and Quote Creation**<br><img src="medi)<br>[Partner Center '7' API Family](/partner-center/) | T&Cs<br>Pricing<br>Discount approvals<br>Final quote<br>PC → Plans (public or private) |
| <center>**4. Sale**<br><img src="medi)<br>[Reporting APIs](https://partneranalytics-api.azureedge.net/partneranalytics-api/Programmatic%20Access%20to%20Commercial%20Marketplace%20Analytics%20Data_v1.pdf) | Contract signing<br>Revenue Recognition<br>Invoicing<br>Billing<br>Azure portal / Admin Center<br>PC Marketplace Rewards<br>PC Payouts Reports<br>PC Marketplace Analytics<br>PC Co-Sell Closing |
-| <center>**5. Maintenance**<br><img src="medi)<br>[(EA Customer) Azure Consumption API](/rest/api/consumption/)<br>[(EA Customer) Azure Charges List API](/rest/api/consumption/charges/list) | Recurring billing<br>Overages<br>Product Support<br>PC Payouts Reports<br>PC Marketplace Analytics |
+| <center>**5. Maintenance**<br><img src="medi)<br>[(EA Customer) Azure Consumption API](/rest/api/consumption/)<br>[(EA Customer) Azure Charges List API](/rest/api/consumption/charges/list) | Recurring billing<br>Overages<br>Product Support<br>PC Payouts Reports<br>PC Marketplace Analytics |
| <center>**6. Contract End**<br><img src="medi)<br>AMA/VM's: auto-renew | Renew or<br>Terminate<br>PC Marketplace Analytics | |
media-services Asset Create Asset How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-create-asset-how-to.md
Title: Upload content to an asset CLI description: The Azure CLI script in this topic shows how to create a Media Services Asset to upload content to. -+ - Previously updated : 02/16/2021 Last updated : 03/01/2022 - # Create an Asset
Follow the steps in [Create a Media Services account](./account-create-how-to.md
## Methods
+## [Portal](#tab/portal/)
+
+Creating assets in the portal is as simple as uploading a file.
++
## [CLI](#tab/cli/)

[!INCLUDE [Create an asset with CLI](./includes/task-create-asset-cli.md)]
-## Example script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/media-services/create-asset/Create-Asset.sh "Create an asset")]
## [REST](#tab/rest/)
media-services Asset Publish Cli How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/asset-publish-cli-how-to.md
- Title: Azure CLI Script Example - Publish an asset
-description: This article demonstrates how to use the Azure CLI script to publish an asset.
------- Previously updated : 08/31/2020----
-# CLI example: Publish an asset
--
-The Azure CLI script in this article shows how to create a Streaming Locator and get Streaming URLs back.
-
-## Prerequisites
-
-[Create a Media Services account](./account-create-how-to.md).
-
-## Example script
-
-[!code-azurecli-interactive[main](../../../cli_scripts/media-services/publish-asset/Publish-Asset.sh "Publish an asset")]
-
-## Next steps
-
-[Media Services overview](media-services-overview.md)
media-services Create Streaming Locator Build Url https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/create-streaming-locator-build-url.md
Title: Create a streaming locator and build URLs description: This article demonstrates how to create a streaming locator and build URLs. - Previously updated : 08/31/2020 Last updated : 03/01/2022 - # Create a streaming locator and build URLs
In Azure Media Services, to build a streaming URL, you need to first create a [S
This article demonstrates how to create a streaming locator and build a streaming URL using Java and .NET SDKs.
-## Prerequisite
+## Prerequisites
Review [Dynamic packaging](encode-dynamic-packaging-concept.md)
-## Java
-
-```java
-/**
-* Creates a StreamingLocator for the specified asset and with the specified streaming policy name.
-* Once the StreamingLocator is created the output asset is available to clients for playback.
-* @param manager The entry point of Azure Media resource management
-* @param resourceGroup The name of the resource group within the Azure subscription
-* @param accountName The Media Services account name
-* @param assetName The name of the output asset
-* @param locatorName The StreamingLocator name (unique in this case)
-* @return The locator created
-*/
-private static StreamingLocator getStreamingLocator(MediaManager manager, String resourceGroup, String accountName,
- String assetName, String locatorName) {
- // Note that we are using one of the PredefinedStreamingPolicies which tell the Origin component
- // of Azure Media Services how to publish the content for streaming.
- System.out.println("Creating a streaming locator...");
- StreamingLocator locator = manager
- .streamingLocators().define(locatorName)
- .withExistingMediaservice(resourceGroup, accountName)
- .withAssetName(assetName)
- .withStreamingPolicyName("Predefined_ClearStreamingOnly")
- .create();
+## Create a streaming locator
- return locator;
-}
+## [Portal](#tab/portal/)
-/**
-* Checks if the streaming endpoint is in the running state, if not, starts it.
-* @param manager The entry point of Azure Media resource management
-* @param resourceGroup The name of the resource group within the Azure subscription
-* @param accountName The Media Services account name
-* @param locatorName The name of the StreamingLocator that was created
-* @param streamingEndpoint The streaming endpoint.
-* @return List of streaming urls
-*/
-private static List<String> getStreamingUrls(MediaManager manager, String resourceGroup, String accountName,
- String locatorName, StreamingEndpoint streamingEndpoint) {
- List<String> streamingUrls = new ArrayList<>();
-
- ListPathsResponse paths = manager.streamingLocators().listPathsAsync(resourceGroup, accountName, locatorName)
- .toBlocking().first();
-
- for (StreamingPath path: paths.streamingPaths()) {
- StringBuilder uriBuilder = new StringBuilder();
- uriBuilder.append("https://")
- .append(streamingEndpoint.hostName())
- .append("/")
- .append(path.paths().get(0));
-
- streamingUrls.add(uriBuilder.toString());
- }
- return streamingUrls;
-}
-```
-See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-java/blob/master/VideoEncoding/EncodingWithMESPredefinedPreset/src/main/java/sample/EncodingWithMESPredefinedPreset.java)
+## [.NET](#tab/net/)
-## .NET
+## Using .NET
```csharp
/// <summary>
private static async Task<IList<string>> GetStreamingUrlsAsync(
See the full code sample: [EncodingWithMESPredefinedPreset](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/VideoEncoding/Encoding_PredefinedPreset/Program.cs)
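The sample above builds each streaming URL by prefixing the streaming endpoint's host name to the first relative path returned per protocol. The same assembly can be sketched language-neutrally (a sketch only; the host name and paths below are illustrative, not real endpoints):

```python
def build_streaming_urls(host_name, streaming_paths):
    """Combine a streaming endpoint host name with locator paths into HTTPS URLs.

    `streaming_paths` mirrors the shape of a list-paths response: one entry
    per protocol, each carrying a list of relative paths.
    """
    urls = []
    for path in streaming_paths:
        if path["paths"]:  # skip protocols with no published path
            urls.append("https://" + host_name + "/" + path["paths"][0])
    return urls

# Illustrative values only:
paths = [
    {"streamingProtocol": "Hls", "paths": ["video.ism/manifest(format=m3u8-aapl)"]},
    {"streamingProtocol": "Dash", "paths": ["video.ism/manifest(format=mpd-time-csf)"]},
]
print(build_streaming_urls("myendpoint-myaccount.streaming.media.azure.net", paths))
```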
-## See also
-
-* [Create filters with .NET](filters-dynamic-manifest-dotnet-how-to.md)
-* [Create filters with REST](filters-dynamic-manifest-rest-howto.md)
-* [Create filters with CLI](filters-dynamic-manifest-cli-how-to.md)
-
-## Next steps
-
-[Protect your content with DRM](drm-protect-with-drm-tutorial.md).
+
media-services Job Create How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/job-create-how-to.md
+
+ Title: Create a job with Media Services
+description: The article shows how to create a Job using different methods.
+++++ Last updated : 03/01/2022+++
+# Create and submit a job
++
+In Media Services v3, when you submit Jobs to process your videos, you have to tell Media Services where to find the input video. One of the options is to specify an HTTPS URL as a job input (as shown in this article).
+
+## Prerequisites
+
+[Create a Media Services account](./account-create-how-to.md).
+
+## [Portal](#tab/portal/)
++
+## [CLI](#tab/cli/)
+
+## Example script
+
+When you run `az ams job start`, you can set a label on the job's output. The label can later be used to identify what this output asset is for.
+
+- If you assign a value to the label, set `--output-assets` to `"assetname=label"`.
+- If you do not assign a value to the label, set `--output-assets` to `"assetname="`.
+  Notice that you add "=" to the `output-assets`.
+
+```azurecli
+az ams job start \
+ --name testJob001 \
+ --transform-name testEncodingTransform \
+ --base-uri 'https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/' \
+ --files 'Ignite-short.mp4' \
+ --output-assets testOutputAssetName= \
+ -a amsaccount \
+ -g amsResourceGroup
+```
+
+You get a response similar to this:
+
+```
+{
+ "correlationData": {},
+ "created": "2019-02-15T05:08:26.266104+00:00",
+ "description": null,
+ "id": "/subscriptions/<id>/resourceGroups/amsResourceGroup/providers/Microsoft.Media/mediaservices/amsaccount/transforms/testEncodingTransform/jobs/testJob001",
+ "input": {
+ "baseUri": "https://nimbuscdn-nimbuspm.streaming.mediaservices.windows.net/2b533311-b215-4409-80af-529c3e853622/",
+ "files": [
+ "Ignite-short.mp4"
+ ],
+ "label": null,
+ "odatatype": "#Microsoft.Media.JobInputHttp"
+ },
+ "lastModified": "2019-02-15T05:08:26.266104+00:00",
+ "name": "testJob001",
+ "outputs": [
+ {
+ "assetName": "testOutputAssetName",
+ "error": null,
+ "label": "",
+ "odatatype": "#Microsoft.Media.JobOutputAsset",
+ "progress": 0,
+ "state": "Queued"
+ }
+ ],
+ "priority": "Normal",
+ "resourceGroup": "amsResourceGroup",
+ "state": "Queued",
+ "type": "Microsoft.Media/mediaservices/transforms/jobs"
+}
+```
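A quick way to confirm the job was accepted is to parse the JSON response and check the job and output states. A minimal sketch (the embedded JSON is abbreviated from the sample response above):

```python
import json

# Abbreviated copy of the sample response shown above.
response = """
{
  "name": "testJob001",
  "state": "Queued",
  "outputs": [
    {"assetName": "testOutputAssetName", "label": "", "state": "Queued"}
  ]
}
"""

job = json.loads(response)
# Verify the job and each of its outputs entered the queue.
assert job["state"] == "Queued"
for output in job["outputs"]:
    print(output["assetName"], "->", output["state"])  # testOutputAssetName -> Queued
```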
++
media-services Stream Manage Streaming Endpoints How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/stream-manage-streaming-endpoints-how-to.md
Title: Manage streaming endpoints description: This article demonstrates how to manage streaming endpoints with Azure Media Services v3.
-writer: juliako
editor: '' - Previously updated : 08/31/2020 Last updated : 03/01/2022 -
-# Manage streaming endpoints with Media Services v3
+# Manage streaming endpoints with Media Services v3
[!INCLUDE [media services api v3 logo](./includes/v3-hr.md)] When your Media Services account is created a **default** [Streaming Endpoint](stream-streaming-endpoint-concept.md) is added to your account in the **Stopped** state. To start streaming your content and take advantage of [dynamic packaging](encode-dynamic-packaging-concept.md) and [dynamic encryption](drm-content-protection-concept.md), the streaming endpoint from which you want to stream content has to be in the **Running** state. This article shows you how to execute the [start](/rest/api/media/streamingendpoints/start) command on your streaming endpoint using different technologies.
-
+ > [!NOTE] > You are only billed when your Streaming Endpoint is in running state.
-
+ ## Prerequisites
-Review:
+Review:
* [Media Services concepts](concepts-overview.md) * [Streaming Endpoint concept](stream-streaming-endpoint-concept.md) * [Dynamic packaging](encode-dynamic-packaging-concept.md)
-## Use REST
-
-```rest
-POST https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mediaresources/providers/Microsoft.Media/mediaservices/slitestmedia10/streamingEndpoints/myStreamingEndpoint1/start?api-version=2018-07-01
-```
+## [Portal](#tab/portal/)
-For more information, see:
+## Use the Azure portal
-* The [start a StreamingEndpoint](/rest/api/media/streamingendpoints/start) reference documentation.
-* Starting a streaming endpoint is an asynchronous operation.
-
- For information about how to monitor long-running operations, see [Long-running operations](media-services-apis-overview.md).
-* This [Postman collection](https://github.com/Azure-Samples/media-services-v3-rest-postman/blob/master/Postman/Media%20Services%20v3.postman_collection.json) contains examples of multiple REST operations, including on how to start a streaming endpoint.
-
-## Use the Azure portal
-
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Go to your Azure Media Services account.
1. In the left pane, select **Streaming Endpoints**.
1. Select the streaming endpoint you want to start, and then select **Start**.
+## [CLI](#tab/CLI/)
+
## Use the Azure CLI

```cli
az ams streaming-endpoint start [--account-name]
For more information, see [az ams streaming-endpoint start](/cli/azure/ams/streaming-endpoint#az_ams_streaming_endpointstart).
-## Use SDKs
+## [REST](#tab/rest/)
-### Java
-
-```java
-if (streamingEndpoint != null) {
-// Start The Streaming Endpoint if it is not running.
-if (streamingEndpoint.resourceState() != StreamingEndpointResourceState.RUNNING) {
- manager.streamingEndpoints().startAsync(config.getResourceGroup(), config.getAccountName(), STREAMING_ENDPOINT_NAME).await();
-}
+## Use REST
+
+```rest
+POST https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/mediaresources/providers/Microsoft.Media/mediaservices/slitestmedia10/streamingEndpoints/myStreamingEndpoint1/start?api-version=2018-07-01
```
-See the complete [Java code sample](https://github.com/Azure-Samples/media-services-v3-java/blob/master/DynamicPackagingVODContent/StreamHLSAndDASH/src/main/java/sample/StreamHLSAndDASH.java#L128).
+For more information, see:
+
+* The [start a StreamingEndpoint](/rest/api/media/streamingendpoints/start) reference documentation.
+* Starting a streaming endpoint is an asynchronous operation.
+
+ For information about how to monitor long-running operations, see [Long-running operations](media-services-apis-overview.md).
+* This [Postman collection](https://github.com/Azure-Samples/media-services-v3-rest-postman/blob/master/Postman/Media%20Services%20v3.postman_collection.json) contains examples of multiple REST operations, including on how to start a streaming endpoint.
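Since start is a long-running operation, clients typically poll the endpoint's resource state until it reports **Running**. A generic polling sketch (the `get_state` callable stands in for whatever SDK or REST call reads the streaming endpoint's `resourceState`; the state strings are illustrative):

```python
import time

def wait_until_running(get_state, timeout=300, interval=5):
    """Poll a state-returning callable until it reports "Running".

    `get_state` stands in for the SDK/REST call that reads the
    streaming endpoint's resourceState.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state == "Running":
            return state
        time.sleep(interval)
    raise TimeoutError("streaming endpoint did not reach Running in time")

# Example with a stubbed sequence of states instead of a real endpoint:
states = iter(["Starting", "Starting", "Running"])
print(wait_until_running(lambda: next(states), interval=0))  # Running
```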
+
+## [.NET](#tab/net/)
-### .NET
+## Use .NET
```csharp StreamingEndpoint streamingEndpoint = await client.StreamingEndpoints.GetAsync(config.ResourceGroup, config.AccountName, DefaultStreamingEndpointName);
if (streamingEndpoint != null)
See the complete [.NET code sample](https://github.com/Azure-Samples/media-services-v3-dotnet/blob/main/Streaming/StreamHLSAndDASH/Program.cs#L112). -
-## Next steps
-
-* [Media Services v3 OpenAPI Specification (Swagger)](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/mediaservices/resource-manager/Microsoft.Media/stable/2018-07-01)
-* [Streaming Endpoint operations](/rest/api/media/streamingendpoints)
media-services Transform Create Transform How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/transform-create-transform-how-to.md
The Azure CLI script in this article shows how to create a transform. Transforms
[Create a Media Services account](./account-create-how-to.md).
-## [CLI](#tab/cli/)
+## Code snippets
-> [!NOTE]
-> You can only specify a path to a custom Standard Encoder preset JSON file for [StandardEncoderPreset](/rest/api/medi) example.
->
-> You cannot pass a file name when using [BuiltInStandardEncoderPreset](/rest/api/media/transforms/createorupdate#builtinstandardencoderpreset).
+## [Portal](#tab/portal/)
-## Example script
-[!code-azurecli-interactive[main](../../../cli_scripts/media-services/create-transform/Create-Transform.sh "Create a transform")]
## [REST](#tab/rest/)
The Azure CLI script in this article shows how to create a transform. Transforms
-## Next steps
-
media-services Video On Demand Simple Portal Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/media-services/latest/video-on-demand-simple-portal-quickstart.md
description: This article shows you how to do the basic steps for delivering vid
- Previously updated : 02/16/2022 Last updated : 03/01/2022
This article shows you how to do the basic steps for delivering a basic video on
> [!NOTE] > You will be switching between several browser tabs or windows during this process. The below steps assume that you have your browser set to open tabs. Keep them all open.
-## Upload videos
-
-You should have a media services account, a storage account, and a default streaming endpoint.
-
-1. In the portal, navigate to the Media Services account that you just created.
-1. Select **Assets**. Assets are the containers that are used to house your media content.
-1. Select **Upload**. The Upload new assets screen will appear.
-1. Select the storage account you created for the Media Services account from the **Storage account** dropdown menu. It should be selected by default.
-1. Select the **file folder icon** next to the Upload files field.
-1. Select the media files you want to use. An asset will be created for every video you upload. The name of the asset will start with the name of the video and will be appended with a unique identifier. You *could* upload the same video twice and it will be located in two different assets.
-1. You must agree to the statement "I have all the rights to use the content/file, and agree that it will be handled per the Online Services Terms and the Microsoft Privacy Statement." Select **I agree and upload.**
-1. Select **Continue upload and close**, or **Close** if you want to watch the video upload progress.
-1. Repeat this process for each of the files you want to stream.
-
-## Create a transform
-> [!IMPORTANT]
-> You must encode your files with a transform in order to stream them, even if they have been encoded locally. The Media Services encoding process creates the manifest files needed for streaming.
+<!-- ## Create a transform -->
-You'll now create a transform that uses a Built-in preset, which is like a recipe for encoding.
-
-1. Select **Transforms + jobs**.
-1. Select **Add transform**. The Add transform screen will appear.
-1. Enter a transform name in the **Transform name** field.
-1. Select the **Encoding** radio button.
-1. Select ContentAwareEncoding from the **Built-in preset name** dropdown list.
-1. Select **Add**.
Stay on this screen for the next steps.
-## Create a job
+<!-- ## Create a job -->
Next, you'll create a job, which tells Media Services which transform to run on the files within an asset. The asset you choose will be the input asset. The job will create an output asset to contain the encoded files as well as the manifest.
-1. Select **Add job**. The Create a job screen will appear.
-1. For the **Input source**, the **Asset** radio button should be selected by default. If not, select it now.
-1. Select **Select an existing asset** and choose one of the assets that was just created when you uploaded your videos. The Select an asset screen will appear.
-1. Select one of the assets in the list. You can only select one at a time for the job.
-1. Select the **Use existing** radio button.
-1. Select the transform that you created earlier from the **Transform** dropdown list.
-1. Under Configure output, default settings will be autopopulated, for this exercise leave them as they are.
-1. Select **Create**.
-1. Select **Transforms + Jobs**.
-1. You'll see the name of the transform you chose for the job. Select the transform to see the status of the job.
-1. Select the job listed under **Name** in the table of jobs. The job detail screen will open.
-1. Select the output asset from the **Outputs** list. The asset screen will open.
-1. Select the link for the asset next to Storage container. A new browser tab will open and You'll see the results of the job that used the transform. There should be several files in the output asset including:
- 1. Encoded video files with .mpi and .mp4 extensions.
- 1. A *XXXX_metadata.json* file.
- 1. A *XXXX_manifest.json* file.
- 1. A *XXXX_.ism* file.
- 1. A *XXXX.isc* file.
- 1. A *ThumbnailXXXX.jpg* file.
-1. Once you've viewed what is in the output asset, close the tab. Go back to the asset browser tab.
-
-## Create a streaming locator
+
+Once you've viewed what is in the output asset, close the tab. Go back to the asset browser tab.
In order to stream your videos you need a streaming locator.
-1. Select **New streaming locator**. The Add streaming locator screen will appear and a default name for the locator will appear. You can change it or leave it as is.
-1. Select *Predefined_ClearStreamingOnly* from the Streaming policy dropdown list. This is a streaming policy that says that the video will be streamed using DASH, HLS and Smooth with no content protection restrictions except that the video can't be downloaded by the viewer. No content key policy is required.
-1. Leave the rest of the settings as they are.
-1. Select **Add**. The video will start playing in the player on the screen, and the **Streaming URL** field will be populated.
-1. Select **Show URLs** in the Streaming locator list. The Streaming URLs screen will appear.
+<!-- ## Create a streaming locator -->
+ On this screen, you'll see that the streaming endpoint that was created when you created your account is in the Streaming endpoint dropdown list along with other data about the streaming locator.
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
[Azure Migrate](migrate-services-overview.md) helps you to discover, assess, and migrate on-premises servers, apps, and data to the Microsoft Azure cloud. This article summarizes new releases and features in Azure Migrate.

## Update (February 2022)
-- Azure Migrate is now supported in Azure China. [Learn more](/azure/chin#azure-operations-in-china).
+- Azure Migrate is now supported in Azure China. [Learn more](/azure/china/overview-operations#azure-operations-in-china).
## Update (December 2021) - Support to discover, assess, and migrate VMs from multiple vCenter Servers using a single Azure Migrate appliance. [Learn more](tutorial-discover-vmware.md#start-continuous-discovery).
mysql Concepts Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-compatibility.md
This article describes the drivers and management tools that are compatible with
## MySQL Drivers

Azure Database for MySQL uses the world's most popular community edition of MySQL database. As such, it's compatible with a wide variety of programming languages and drivers. The goal is to support the three most recent versions of MySQL drivers, and efforts continue with authors from the open-source community to improve the functionality and usability of MySQL drivers. A list of drivers that have been tested and found to be compatible with Azure Database for MySQL 5.6 and 5.7 is provided in the following table:
-> [!WARNING]
-> The MySQL 8.0.27 client is incompatible with Azure Database for MySQL - Single Server. All connections from the MySQL 8.0.27 client created either via mysql.exe or workbench will fail. As a workaround, consider using an earlier version of the client (prior to MySQL 8.0.27) or creating an instance of [Azure Database for MySQL - Flexible Server](./flexible-server/overview.md) instead.
- | **Programming Language** | **Driver** | **Links** | **Compatible Versions** | **Incompatible Versions** | **Notes** | | :-- | : | :-- | :- | : | :-- | | PHP | mysqli, pdo_mysql, mysqlnd | https://secure.php.net/downloads.php | 5.5, 5.6, 7.x | 5.3 | For PHP 7.0 connection with SSL MySQLi, add MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT in the connection string. <br> ```mysqli_real_connect($conn, $host, $username, $password, $db_name, 3306, NULL, MYSQLI_CLIENT_SSL_DONT_VERIFY_SERVER_CERT);```<br> PDO set: ```PDO::MYSQL_ATTR_SSL_VERIFY_SERVER_CERT``` option to false.|
mysql Tutorial Php Database App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-php-database-app.md
[!INCLUDE[applies-to-mysql-flexible-server](../includes/applies-to-mysql-flexible-server.md)] - :::image type="content" source="media/tutorial-php-database-app/complete-checkbox-published.png" alt-text="PHP Web App in Azure with Flexible Server"::: [Azure App Service](../../app-service/overview.md) provides a highly scalable, self-patching web hosting service using the Linux operating system. This tutorial shows how to create a PHP app in Azure and connect it to a MySQL database. When you're finished, you'll have a [Laravel](https://laravel.com/) app running on Azure App Service on Linux. In this tutorial, you learn how to: > [!div class="checklist"]
+>
> * Setup a PHP (Laravel) app with local MySQL > * Create a MySQL Flexible Server > * Connect a PHP app to MySQL Flexible Server
In this tutorial, you learn how to:
> * Update the data model and redeploy the app > * Manage the app in the Azure portal - [!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)] ## Prerequisites
quit
<a name="step2"></a>

## Create a PHP app locally
+
In this step, you get a Laravel sample application, configure its database connection, and run it locally.

### Clone the sample
DB_USERNAME=root
DB_PASSWORD=<root_password> ```
-For information on how Laravel uses the _.env_ file, see [Laravel Environment Configuration](https://laravel.com/docs/5.4/configuration#environment-configuration).
+For information on how Laravel uses the *.env* file, see [Laravel Environment Configuration](https://laravel.com/docs/5.4/configuration#environment-configuration).
### Run the sample locally
-Run [Laravel database migrations](https://laravel.com/docs/5.4/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the _database/migrations_ directory in the Git repository.
+Run [Laravel database migrations](https://laravel.com/docs/5.4/migrations) to create the tables the application needs. To see which tables are created in the migrations, look in the *database/migrations* directory in the Git repository.
```bash
php artisan migrate
Navigate to `http://localhost:8000` in a browser. Add a few tasks in the page.
To stop PHP, type `Ctrl + C` in the terminal.

## Create a MySQL Flexible Server

In this step, you create a MySQL database in [Azure Database for MySQL Flexible Server](../index.yml). Later, you configure the PHP application to connect to this database.

In the [Azure Cloud Shell](../../cloud-shell/overview.md), create a server with the [`az mysql flexible-server create`](/cli/azure/mysql/server#az_mysql_flexible_server_create) command.

```azurecli-interactive
az mysql flexible-server create --resource-group myResourceGroup --public-acces
```

> [!IMPORTANT]
->- Make a note of the **servername** and **connection string** to use it in the next step to connect and run laravel data migration.
-> - For **IP-Address** argument, provide the IP of your client machine. The server is locked when created and you need to permit access to your client machine to manage the server locally.
+>
+>* Make a note of the **servername** and **connection string** to use it in the next step to connect and run laravel data migration.
+> * For **IP-Address** argument, provide the IP of your client machine. The server is locked when created and you need to permit access to your client machine to manage the server locally.
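One common pattern is to park those values in shell variables so later steps can reuse them; the names and values below are illustrative placeholders only, not a real server:

```shell
# Keep the values from the create command handy for later steps.
# These are illustrative placeholders, not real server values.
SERVER_NAME="mydemoserver.mysql.database.azure.com"
DB_USER="mysqladmin"
echo "server: ${SERVER_NAME}, admin: ${DB_USER}"
```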
### Configure server firewall to allow web app to connect to the server
CREATE DATABASE sampledb;
### Create a user with permissions
-Create a database user called _phpappuser_ and give it all privileges in the `sampledb` database. For simplicity of the tutorial, use _MySQLAzure2020_ as the password.
+Create a database user called *phpappuser* and give it all privileges in the `sampledb` database. For simplicity of the tutorial, use *MySQLAzure2020* as the password.
```sql
CREATE USER 'phpappuser' IDENTIFIED BY 'MySQLAzure2020';
In this step, you connect the PHP application to the MySQL database you created
### Configure the database connection
-In the repository root, create an _.env.production_ file and copy the following variables into it. Replace the placeholder _&lt;mysql-server-name>_ in both *DB_HOST* and *DB_USERNAME*.
+In the repository root, create an *.env.production* file and copy the following variables into it. Replace the placeholder _&lt;mysql-server-name>_ in both *DB_HOST* and *DB_USERNAME*.
``` APP_ENV=production
MYSQL_SSL=true
Save the changes.

> [!TIP]
-> To secure your MySQL connection information, this file is already excluded from the Git repository (See _.gitignore_ in the repository root). Later, you learn how to configure environment variables in App Service to connect to your database in Azure Database for MySQL. With environment variables, you don't need the *.env* file in App Service.
+> To secure your MySQL connection information, this file is already excluded from the Git repository (See *.gitignore* in the repository root). Later, you learn how to configure environment variables in App Service to connect to your database in Azure Database for MySQL. With environment variables, you don't need the *.env* file in App Service.
> ### Configure TLS/SSL certificate
-By default, MySQL Flexible Server enforces TLS connections from clients. To connect to your MySQL database in Azure, you must use the [_.pem_ certificate supplied by Azure Database for MySQL Flexible Server](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem). Download [this certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem)) and place it in the **SSL** folder in the local copy of the sample app repository.
+By default, MySQL Flexible Server enforces TLS connections from clients. To connect to your MySQL database in Azure, you must use the [*.pem* certificate supplied by Azure Database for MySQL Flexible Server](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem). Download [this certificate](https://dl.cacerts.digicert.com/DigiCertGlobalRootCA.crt.pem) and place it in the **SSL** folder in the local copy of the sample app repository.
-Open _config/database.php_ and add the `sslmode` and `options` parameters to `connections.mysql`, as shown in the following code.
+Open *config/database.php* and add the `sslmode` and `options` parameters to `connections.mysql`, as shown in the following code.
```php 'mysql' => [
Open _config/database.php_ and add the `sslmode` and `options` parameters to `co
### Test the application locally
-Run Laravel database migrations with _.env.production_ as the environment file to create the tables in your MySQL database in Azure Database for MySQL. Remember that _.env.production_ has the connection information to your MySQL database in Azure.
+Run Laravel database migrations with *.env.production* as the environment file to create the tables in your MySQL database in Azure Database for MySQL. Remember that *.env.production* has the connection information to your MySQL database in Azure.
```bash
php artisan migrate --env=production --force
```
-_.env.production_ doesn't have a valid application key yet. Generate a new one for it in the terminal.
+*.env.production* doesn't have a valid application key yet. Generate a new one for it in the terminal.
```bash
php artisan key:generate --env=production --force
```
-Run the sample application with _.env.production_ as the environment file.
+Run the sample application with *.env.production* as the environment file.
```bash
php artisan serve --env=production
az appservice plan create --name myAppServicePlan --resource-group myResourceGro
Create a [web app](../../app-service/overview.md#app-service-on-linux) in the myAppServicePlan App Service plan.
-In the Cloud Shell, you can use the [az webapp create](/cli/azure/webapp#az_webapp_create) command. In the following example, replace _&lt;app-name>_ with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.0`. To see all supported runtimes, run [az webapp list-runtimes --linux](/cli/azure/webapp#az_webapp_list_runtimes).
+In the Cloud Shell, you can use the [az webapp create](/cli/azure/webapp#az_webapp_create) command. In the following example, replace _&lt;app-name>_ with a globally unique app name (valid characters are `a-z`, `0-9`, and `-`). The runtime is set to `PHP|7.3`. To see all supported runtimes, run [az webapp list-runtimes --os linux](/cli/azure/webapp#az_webapp_list_runtimes).
```bash
az webapp create --resource-group myResourceGroup --plan myAppServicePlan --name <app-name> --runtime "PHP|7.3" --deployment-local-git
You've created an empty new web app, with git deployment enabled.
### Configure database settings
-In App Service, you set environment variables as _app settings_ by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command.
+In App Service, you set environment variables as *app settings* by using the [`az webapp config appsettings set`](/cli/azure/webapp/config/appsettings#az_webapp_config_appsettings_set) command.
The following command configures the app settings `DB_HOST`, `DB_DATABASE`, `DB_USERNAME`, and `DB_PASSWORD`. Replace the placeholders _&lt;app-name>_ and _&lt;mysql-server-name>_.
The following command configures the app settings `DB_HOST`, `DB_DATABASE`, `DB_
az webapp config appsettings set --name <app-name> --resource-group myResourceGroup --settings DB_HOST="<mysql-server-name>.mysql.database.azure.com" DB_DATABASE="sampledb" DB_USERNAME="phpappuser" DB_PASSWORD="MySQLAzure2017" MYSQL_SSL="true" ```
-You can use the PHP [getenv](https://www.php.net/manual/en/function.getenv.php) method to access the settings. the Laravel code uses an [env](https://laravel.com/docs/5.4/helpers#method-env) wrapper over the PHP `getenv`. For example, the MySQL configuration in _config/database.php_ looks like the following code:
+You can use the PHP [getenv](https://www.php.net/manual/en/function.getenv.php) method to access the settings. The Laravel code uses an [env](https://laravel.com/docs/5.4/helpers#method-env) wrapper over the PHP `getenv`. For example, the MySQL configuration in *config/database.php* looks like the following code:
```php 'mysql' => [
You can use the PHP [getenv](https://www.php.net/manual/en/function.getenv.php)
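Under the hood this is ordinary environment-variable inheritance: an app setting reaches the PHP process the same way an exported shell variable reaches a child process. A minimal sketch (`sampledb` is the tutorial's sample database name):

```shell
# App settings become ordinary environment variables in the app's process;
# any child process can read them directly.
export DB_DATABASE=sampledb
sh -c 'echo "child sees DB_DATABASE=${DB_DATABASE}"'
```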
Laravel needs an application key in App Service. You can configure it with app settings.
-In the local terminal window, use `php artisan` to generate a new application key without saving it to _.env_.
+In the local terminal window, use `php artisan` to generate a new application key without saving it to *.env*.
```bash
php artisan key:generate --show
az webapp config appsettings set --name <app-name> --resource-group myResourceGr
### Set the virtual application path
-[Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the _public_ directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. However, you can use `.htaccess` to rewrite all requests to point to _/public_ instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
+[Laravel application lifecycle](https://laravel.com/docs/5.4/lifecycle) begins in the *public* directory instead of the application's root directory. The default PHP Docker image for App Service uses Apache, and it doesn't let you customize the `DocumentRoot` for Laravel. However, you can use `.htaccess` to rewrite all requests to point to */public* instead of the root directory. In the repository root, an `.htaccess` is added already for this purpose. With it, your Laravel application is ready to be deployed.
For more information, see [Change site root](../../app-service/configure-language-php.md?pivots=platform-linux#change-site-root).
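The rewrite that the repository's `.htaccess` performs can be sketched roughly as follows (illustrative only; keep the file that ships with the sample, which may differ in detail):

```apache
<IfModule mod_rewrite.c>
    RewriteEngine On
    # Send every request to the Laravel front controller under /public.
    RewriteRule ^(.*)$ public/$1 [L]
</IfModule>
```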
Generate a new database migration for the `tasks` table:
php artisan make:migration add_complete_column --table=tasks
```
-This command shows you the name of the migration file that's generated. Find this file in _database/migrations_ and open it.
+This command shows you the name of the migration file that's generated. Find this file in *database/migrations* and open it.
Replace the `up` method with the following code:
In the local terminal window, run Laravel database migrations to make the change
php artisan migrate
```
-Based on the [Laravel naming convention](https://laravel.com/docs/5.4/eloquent#defining-models), the model `Task` (see _app/Task.php_) maps to the `tasks` table by default.
+Based on the [Laravel naming convention](https://laravel.com/docs/5.4/eloquent#defining-models), the model `Task` (see *app/Task.php*) maps to the `tasks` table by default.
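The convention can be illustrated with a deliberately simplified sketch (Laravel's real pluralizer also handles irregular nouns, which this one-liner does not):

```shell
# Laravel's default convention: model name (singular, StudlyCase)
# maps to table name (plural, lowercase). Simplified: lowercase + "s".
model="Task"
table="$(echo "$model" | tr '[:upper:]' '[:lower:]')s"
echo "$model -> $table"
```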
### Update application logic
Once the `git push` is complete, navigate to the Azure app and test the new func
If you added any tasks, they are retained in the database. Updates to the data schema leave existing data intact.

## Clean up resources

In the preceding steps, you created Azure resources in a resource group. If you don't expect to need these resources in the future, delete the resource group by running the following command in the Cloud Shell:

```bash
mysql Single Server Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server-whats-new.md
Last updated 06/17/2021
Azure Database for MySQL is a relational database service in the Microsoft cloud. The service is based on the [MySQL Community Edition](https://www.mysql.com/products/community/) (available under the GPLv2 license) database engine and supports versions 5.6 (retired), 5.7, and 8.0. [Azure Database for MySQL - Single Server](./overview.md#azure-database-for-mysqlsingle-server) is a deployment mode that provides a fully managed database service with minimal requirements for customization of the database. The Single Server platform is designed to handle most database management functions such as patching, backups, high availability, and security, all with minimal user configuration and control. This article summarizes new releases and features in Azure Database for MySQL - Single Server beginning in January 2021. Listings appear in reverse chronological order, with the most recent updates first.
+## March 2022
+
+This release of Azure Database for MySQL - Single Server includes the following updates.
+
+**Bug Fixes**
+
+The MySQL 8.0.27 client and newer versions are now compatible with Azure Database for MySQL - Single Server.
## February 2022

This release of Azure Database for MySQL - Single Server includes the following updates.
postgresql Concepts Aad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-aad-authentication.md
description: Learn about the concepts of Azure Active Directory for authenticati
+ Last updated 07/23/2020
postgresql Concepts Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/concepts-aks.md
description: Learn about connecting Azure Kubernetes Service (AKS) with Azure Da
+ Last updated 07/14/2020
postgresql Azure Pipelines Deploy Database Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/azure-pipelines-deploy-database-task.md
Title: Azure Pipelines task Azure Database for PostgreSQL Flexible Server description: Enable Azure Database for PostgreSQL Flexible Server CLI task for using with Azure Pipelines++ - -+ Last updated 11/30/2021
postgresql Concepts Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-audit.md
Title: Audit logging - Azure Database for PostgreSQL - Flexible server description: Concepts for pgAudit audit logging in Azure Database for PostgreSQL - Flexible server.-- + ++ Last updated 11/30/2021
Last updated 11/30/2021
Audit logging of database activities in Azure Database for PostgreSQL - Flexible server is available through the PostgreSQL Audit extension: [pgAudit](https://www.pgaudit.org/). pgAudit provides detailed session and/or object audit logging. -- If you want Azure resource-level logs for operations like compute and storage scaling, see the [Azure Activity Log](../../azure-monitor/essentials/platform-logs-overview.md). ## Usage considerations
postgresql Concepts Azure Advisor Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-advisor-recommendations.md
Title: Azure Advisor for PostgreSQL - Flexible Server description: Learn about Azure Advisor recommendations for PostgreSQL - Flexible Server.-- + ++ Last updated 11/16/2021 # Azure Advisor for PostgreSQL - Flexible Server
postgresql Concepts Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-backup-restore.md
Title: Backup and restore in Azure Database for PostgreSQL - Flexible Server description: Learn about the concepts of backup and restore with Azure Database for PostgreSQL - Flexible Server.-- + ++ Last updated 11/30/2021
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md
description: Learn about the concepts of business continuity with Azure Database
+ Last updated 11/30/2021 # Overview of business continuity with Azure Database for PostgreSQL - Flexible Server -- **Business continuity** in Azure Database for PostgreSQL - Flexible Server refers to the mechanisms, policies, and procedures that enable your business to continue operating in the face of disruption, particularly to its computing infrastructure. In most cases, flexible server will handle the disruptive events that might happen in the cloud environment and keep your applications and business processes running. However, there are some events that cannot be handled automatically, such as: - User accidentally deletes or updates a row in a table.
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
description: Detailed comparison of features and capabilities between Azure Data
+ Last updated 12/08/2021 # Comparison chart - Azure Database for PostgreSQL Single Server and Flexible Server -- The following table provides a high-level comparison of features and capabilities between Single Server and Flexible Server. | **Feature / Capability** | **Single Server** | **Flexible Server** |
postgresql Concepts Compute Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compute-storage.md
description: This article describes the compute and storage options in Azure Dat
+ Last updated 11/30/2021
postgresql Concepts Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-extensions.md
description: Learn about the available PostgreSQL extensions in Azure Database f
+ Last updated 11/30/2021
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-firewall-rules.md
description: This article describes how to use firewall rules to connect to Azur
+ Last updated 11/30/2021 + # Firewall rules in Azure Database for PostgreSQL - Flexible Server When you're running Azure Database for PostgreSQL - Flexible Server, you have two main networking options. The options are private access (virtual network integration) and public access (allowed IP addresses).
postgresql Concepts High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-high-availability.md
description: Learn about the concepts of zone redundant high availability with A
+ Last updated 11/30/2021
postgresql Concepts Intelligent Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-intelligent-tuning.md
description: This article describes the intelligent tuning feature in Azure Data
+ Last updated 11/30/2021
postgresql Concepts Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md
description: This article describes limits in Azure Database for PostgreSQL - Fl
+ Last updated 11/30/2021
postgresql Concepts Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logging.md
description: Describes logging configuration, storage and analysis in Azure Data
+ Last updated 11/30/2021
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logical.md
description: Learn about using logical replication and logical decoding in Azure
+ Last updated 11/30/2021
postgresql Concepts Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-maintenance.md
description: This article describes the scheduled maintenance feature in Azure D
+ Last updated 11/30/2021
postgresql Concepts Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-monitoring.md
description: This article describes monitoring and metrics features in Azure Dat
+ Last updated 11/30/2021
postgresql Concepts Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-networking.md
description: Learn about connectivity and networking options in the Flexible Ser
+ Last updated 11/30/2021
postgresql Concepts Pgbouncer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-pgbouncer.md
description: This article provides an overview with the built-in PgBouncer exten
+ Last updated 11/30/2021
postgresql Concepts Query Store Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store-best-practices.md
description: This article describes best practices for Query Store in Azure Data
+ Last updated 11/30/2021
postgresql Concepts Query Store Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store-scenarios.md
description: This article describes some scenarios for Query Store in Azure Data
+ Last updated 11/30/2021
postgresql Concepts Query Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-query-store.md
description: This article describes the Query Store feature in Azure Database fo
+ Last updated 11/30/2021
postgresql Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-security.md
description: Learn about security in the Flexible Server deployment option for A
+ ms.devlang: python
postgresql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-server-parameters.md
description: Describes the server parameters in Azure Database for PostgreSQL -
+ Last updated 11/30/2021
postgresql Concepts Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-servers.md
description: This article provides considerations and guidelines for configuring
+ Last updated 11/30/2021
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-supported-versions.md
description: Describes the supported PostgreSQL major and minor versions in Azur
+ Previously updated : 11/30/2021 Last updated : 02/28/2022 # Supported PostgreSQL major versions in Azure Database for PostgreSQL - Flexible Server
Azure Database for PostgreSQL - Flexible Server currently supports the following
## PostgreSQL version 13
-The current minor release is **13.4**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/13/static/release-13-4.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
+The current minor release is **13.5**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/13/static/release-13-5.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version.
## PostgreSQL version 12
-The current minor release is **12.8**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/12/static/release-12-8.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **12.9**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/12/static/release-12-9.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 11
-The current minor release is **11.13**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-13.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
+The current minor release is **11.14**. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-14.html) to learn more about improvements and fixes in this release. New servers will be created with this minor version. Your existing servers will be automatically upgraded to the latest supported minor version in your future scheduled maintenance window.
## PostgreSQL version 10 and older
postgresql Connect Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-azure-cli.md
description: This quickstart provides several ways to connect with Azure CLI wit
+ Last updated 11/30/2021
postgresql Connect Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-csharp.md
description: "This quickstart provides a C# (.NET) code sample you can use to co
+ ms.devlang: csharp
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-java.md
description: In this quickstart, you learn how to use Java and JDBC with an Azur
+ ms.devlang: java
postgresql Connect Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-python.md
description: This quickstart provides several Python code samples you can use to
+ ms.devlang: python
postgresql How To Configure High Availability Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-high-availability-cli.md
description: This article describes how to configure zone redundant high availab
+ Last updated 11/30/2021
postgresql How To Connect Query Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-query-guide.md
Title: Connect and query - Flexible Server PostgreSQL
description: Links to quickstarts showing how to connect to your Azure Database for PostgreSQL Flexible Server and run queries. +
postgresql How To Connect Scram https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-scram.md
description: Instructions and information on how to configure and connect using
+ Last updated 11/30/2021
postgresql How To Connect Tls Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-connect-tls-ssl.md
description: Instructions and information on how to connect using TLS/SSL in Azu
+ Last updated 11/30/2021
postgresql How To Deploy On Azure Free Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-deploy-on-azure-free-account.md
description: Guidance on how to deploy an Azure Database for PostgreSQL - Flexib
+ Last updated 11/30/2021
postgresql How To Maintenance Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-maintenance-portal.md
description: Learn how to configure scheduled maintenance settings for an Azure
+ Last updated 11/30/2021
postgresql How To Manage Firewall Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-firewall-cli.md
description: Create and manage firewall rules for Azure Database for PostgreSQL
+ ms.devlang: azurecli Last updated 11/30/2021
postgresql How To Manage Firewall Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-firewall-portal.md
description: Create and manage firewall rules for Azure Database for PostgreSQL
+ Last updated 11/30/2021
postgresql How To Manage High Availability Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-high-availability-portal.md
description: This article describes how to enable or disable zone redundant high
+ Last updated 11/30/2021
postgresql How To Manage Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-cli.md
description: Learn how to manage an Azure Database for PostgreSQL - Flexible Ser
+ Last updated 11/30/2021
postgresql How To Manage Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-server-portal.md
description: Learn how to manage an Azure Database for PostgreSQL - Flexible Ser
+ Last updated 11/30/2021
postgresql How To Manage Virtual Network Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-cli.md
description: Create and manage virtual networks for Azure Database for PostgreSQ
+ Last updated 11/30/2021
postgresql How To Manage Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-portal.md
description: Create and manage virtual networks for Azure Database for PostgreSQ
+ Last updated 11/30/2021
postgresql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-cli.md
description: This article describes how to restart operations in Azure Database
+ Last updated 11/30/2021
postgresql How To Restart Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restart-server-portal.md
description: This article describes how to perform restart operations in Azure D
+ Last updated 11/30/2021
postgresql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-cli.md
description: This article describes how to perform restore operations in Azure D
+ Last updated 11/30/2021
postgresql How To Restore Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-restore-server-portal.md
description: This article describes how to perform restore operations in Azure D
+ Last updated 11/30/2021
postgresql How To Scale Compute Storage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-scale-compute-storage-portal.md
description: This article describes how to perform scale operations in Azure Dat
+ Last updated 11/30/2021
postgresql How To Stop Start Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-cli.md
description: This article describes how to stop/start operations in Azure Databa
+ Last updated 11/30/2021
postgresql How To Stop Start Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-stop-start-server-portal.md
description: This article describes how to stop/start operations in Azure Databa
+ Last updated 11/30/2021
postgresql How To Troubleshoot Cli Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-troubleshoot-cli-errors.md
description: This topic gives guidance on troubleshooting common issues with Azu
+ Last updated 11/30/2021
postgresql Howto Alert On Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-alert-on-metrics.md
description: This article describes how to configure and access metric alerts fo
+ Last updated 11/30/2021
postgresql Howto Configure And Access Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-and-access-logs.md
description: How to access database logs for Azure Database for PostgreSQL - Fle
+ Last updated 11/30/2021
postgresql Howto Configure Server Parameters Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-server-parameters-using-cli.md
description: This article describes how to configure Postgres parameters in Azur
+ ms.devlang: azurecli Last updated 11/30/2021
postgresql Howto Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-server-parameters-using-portal.md
description: This article describes how to configure the Postgres parameters in
+ Last updated 11/30/2021
postgresql Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/overview.md
description: Provides an overview of Azure Database for PostgreSQL - Flexible Se
+ Previously updated : 02/17/2022 Last updated : 02/25/2022
One advantage of running your workload in Azure is global reach. The flexible se
| Australia Southeast | :heavy_check_mark: | :x: | :x: |
| Brazil South | :heavy_check_mark: (v3 only) | :x: | :x: |
| Canada Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Central India | :heavy_check_mark: | :x: | :x: |
+| Central India | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
| Central US | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
-| East Asia | :heavy_check_mark: | :x: | :x: |
+| East Asia | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
| East US | :heavy_check_mark: | :heavy_check_mark: | :x: |
| East US 2 | :heavy_check_mark: | :x: $ | :heavy_check_mark: |
| France Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Germany West Central | :heavy_check_mark: | :heavy_check_mark: | :x: |
| Japan East | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Japan West | :heavy_check_mark: | :x: | :heavy_check_mark: |
-| Korea Central | :heavy_check_mark: | :x: | :x: |
+| Korea Central | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
| Korea South | :heavy_check_mark: | :x: | :x: |
| North Central US | :heavy_check_mark: | :x: | :x: |
| North Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
One advantage of running your workload in Azure is global reach. The flexible se
| Sweden Central | :heavy_check_mark: | :x: | :x: |
| Switzerland North | :heavy_check_mark: | :x: | :x: |
| UAE North | :heavy_check_mark: | :x: | :x: |
+| US Gov Arizona | :heavy_check_mark: | :x: | :x: |
+| US Gov Virginia | :heavy_check_mark: | :heavy_check_mark: | :x: |
| UK South | :heavy_check_mark: | :heavy_check_mark: | :x: |
| UK West | :heavy_check_mark: | :x: | :x: |
| West Europe | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| West US | :heavy_check_mark: | :x: | :x: |
| West US 2 | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| West US 3 | :heavy_check_mark: | :x: | :x: |
+| West US 3 | :heavy_check_mark: | :heavy_check_mark: ** | :x: |
-$ New Zone-redundant high availability deployments are temporarily blocked in this region. Already provisioned HA servers are fully supported.
+$ New Zone-redundant high availability deployments are temporarily blocked in these regions. Already provisioned HA servers are fully supported.
+
+** Zone-redundant high availability can now be deployed when you provision new servers in these regions. For pre-existing servers deployed in an AZ with *no preference* (which you can check in the Azure portal), enabling HA provisions the standby in the same AZ. To configure zone-redundant high availability, perform a point-in-time restore of the server and enable HA on the restored server.
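The restore-then-enable approach above can be sketched with the Azure CLI. Treat this as an outline only: the `az postgres flexible-server` command group exists, but exact flag names and accepted values vary by CLI version (check `--help`), and the server names and timestamp are placeholders.

```azurecli
# Point-in-time restore of the existing server to a new server.
az postgres flexible-server restore \
  --resource-group myresourcegroup \
  --name myrestoredserver \
  --source-server myexistingserver \
  --restore-time "2022-03-01T13:10:00Z"

# Then enable zone-redundant high availability on the restored server.
az postgres flexible-server update \
  --resource-group myresourcegroup \
  --name myrestoredserver \
  --high-availability ZoneRedundant
```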
<!-- We continue to add more regions for flexible server. -->
> [!NOTE]
postgresql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-connect-server-vnet.md
description: This article shows how to create and connect to Azure Database for
+ Last updated 11/30/2021
postgresql Quickstart Create Server Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-arm-template.md
Title: 'Quickstart: Create an Azure DB for PostgresSQL Flexible Server - ARM tem
description: In this Quickstart, learn how to create an Azure Database for PostgreSQL Flexible Server using an ARM template. +
Create a _postgres-flexible-server-template.json_ file and copy the following JS
"defaultValue": "Standard_D4ds_v4",
"type": "String"
},
- "haEnabled": {
- "defaultValue": "Disabled",
- "type": "string"
- },
+ "haMode": {
+ "defaultValue": "ZoneRedundant",
+ "type": "string"
+ },
"availabilityZone": {
"defaultValue": "1",
"type": "String"
Create a _postgres-flexible-server-template.json_ file and copy the following JS
"delegatedSubnetResourceId": "[if(empty(parameters('virtualNetworkExternalId')), json('null'), json(concat(parameters('virtualNetworkExternalId'), '/subnets/' , parameters('subnetName'))))]",
"privateDnsZoneArmResourceId": "[if(empty(parameters('virtualNetworkExternalId')), json('null'), parameters('privateDnsZoneArmResourceId'))]"
},
- "haEnabled": "[parameters('haEnabled')]",
+ "highAvailability": {
+ "mode": "[parameters('haMode')]"
+ },
"storage": {
"storageSizeGB": "[parameters('skuSizeGB')]"
},
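For reference, a parameters file for this template that opts into the new `highAvailability` shape might look like the following sketch. The schema URL is the standard ARM deployment-parameters form; `haMode` and `availabilityZone` match the parameter names in the template above, and the values are illustrative.

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "haMode": { "value": "ZoneRedundant" },
    "availabilityZone": { "value": "1" }
  }
}
```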
postgresql Quickstart Create Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-cli.md
description: This quickstart describes how to use the Azure CLI to create an Azu
+ ms.devlang: azurecli Last updated 11/30/2021
postgresql Quickstart Create Server Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/quickstart-create-server-portal.md
description: Quickstart guide to creating and managing an Azure Database for Pos
+ Last updated 12/01/2021
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
+ Previously updated : 11/30/2021 Last updated : 02/28/2022
# Release notes - Azure Database for PostgreSQL - Flexible Server
This page provides the latest news and updates regarding feature additions, engine version support, extensions, and any other announcements relevant for Flexible Server - PostgreSQL.
+## Release: February 2022
+* Support for the [latest PostgreSQL minors](./concepts-supported-versions.md) 13.5, 12.7, and 11.12 for newly created servers<sup>$</sup>.
+* Support for [US Gov regions](overview.md#azure-regions) - Arizona and Virginia
+* Support for [extensions](concepts-extensions.md) TimescaleDB, orafce, and pg_repack with new servers<sup>$</sup>
+* Extensions need to be [allow-listed](concepts-extensions.md#how-to-use-postgresql-extensions) before they can be installed.
+* Support for zone-redundant high availability for newly created servers in the [regions](overview.md#azure-regions) Central India, Korea Central, East Asia, and West US 3.
+* Several bug fixes, stability, security, and performance improvements<sup>$</sup>.
+
+<sup>**$**</sup> New servers get these features automatically. For existing servers, these features are enabled during the server's next scheduled maintenance window.
## Release: November 2021
postgresql Tutorial Django Aks Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-aks-database.md
Title: 'Tutorial: Deploy Django on AKS cluster with PostgreSQL Flexible Server by using Azure CLI'
description: Learn how to quickly build and deploy Django on AKS with Azure Database for PostgreSQL - Flexible Server. +
postgresql Tutorial Django App Service Postgres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-django-app-service-postgres.md
description: Deploy Django app with App Service and Azure Database for PostgreSQ
+ ms.devlang: azurecli Last updated 11/30/2021
postgresql Tutorial Webapp Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/tutorial-webapp-server-vnet.md
description: Quickstart guide to create Azure Database for PostgreSQL - Flexible
+ ms.devlang: azurecli Last updated 11/30/2021
postgresql Concepts Connection Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-connection-pool.md
through PgBouncer, follow these steps:
portal. 2. Enable the checkbox **PgBouncer connection strings**. (The listed connection strings will change.)-
- > [!IMPORTANT]
- >
- > If the checkbox does not exist, PgBouncer isn't enabled for your server
- > group yet. Managed PgBouncer is being rolled out to all [supported
- > regions](resources-regions.md). Once
- > enabled in a region, it'll be added to existing server groups in the
- > region during a [scheduled
- > maintenance](concepts-maintenance.md) event.
- 3. Update client applications to connect with the new string. ## Next steps
private-link Disable Private Endpoint Network Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/disable-private-endpoint-network-policy.md
The following examples describe how to disable and enable `PrivateEndpointNetwor
This section describes how to disable subnet private endpoint policies using Azure PowerShell. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) and [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to disable the policy.
```azurepowershell
-$net =@{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
-}
-$vnet = Get-AzVirtualNetwork @net
+$SubnetName = "default"
+$VnetName = "myVNet"
+$RGName = "myResourceGroup"
-($vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq 'default'}).PrivateEndpointNetworkPolicies = "Disabled"
-
-$vnet | Set-AzVirtualNetwork
+$virtualNetwork = Get-AzVirtualNetwork -Name $VnetName -ResourceGroupName $RGName
+($virtualNetwork | Select -ExpandProperty subnets | Where-Object {$_.Name -eq $SubnetName}).PrivateEndpointNetworkPolicies = "Disabled"
+$virtualNetwork | Set-AzVirtualNetwork
```
### Enable network policy
$vnet | Set-AzVirtualNetwork
This section describes how to enable subnet private endpoint policies using Azure PowerShell. Use [Get-AzVirtualNetwork](/powershell/module/az.network/get-azvirtualnetwork) and [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to enable the policy.
```azurepowershell
-$net =@{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
-}
-$vnet = Get-AzVirtualNetwork @net
-
-($vnet | Select -ExpandProperty subnets | Where-Object {$_.Name -eq 'default'}).PrivateEndpointNetworkPolicies = "Enabled"
+$SubnetName = "default"
+$VnetName = "myVNet"
+$RGName = "myResourceGroup"
-$vnet | Set-AzVirtualNetwork
+$virtualNetwork= Get-AzVirtualNetwork -Name $VnetName -ResourceGroupName $RGName
+($virtualNetwork | Select -ExpandProperty subnets | Where-Object {$_.Name -eq $SubnetName}).PrivateEndpointNetworkPolicies = "Enabled"
+$virtualNetwork | Set-AzVirtualNetwork
```
## Azure CLI
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
The Azure services that support each encryption model:
| Azure SQL Database for MariaDB | Yes | - | - |
| Azure SQL Database for MySQL | Yes | Yes | - |
| Azure SQL Database for PostgreSQL | Yes | Yes | - |
-| Azure Synapse Analytics | Yes | Yes, RSA 3072-bit | - |
+| Azure Synapse Analytics | Yes | Yes, RSA 3072-bit, including Managed HSM | - |
| SQL Server Stretch Database | Yes | Yes, RSA 3072-bit | Yes |
| Table Storage | Yes | Yes | Yes |
| Azure Cosmos DB | Yes ([learn more](../../cosmos-db/database-security.md?tabs=sql-api)) | Yes ([learn more](../../cosmos-db/how-to-setup-cmk.md)) | - |
service-bus-messaging Service Bus Azure And Service Bus Queues Compared Contrasted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-azure-and-service-bus-queues-compared-contrasted.md
This section compares some of the fundamental queuing capabilities provided by S
| Comparison Criteria | Storage queues | Service Bus queues |
| --- | --- | --- |
-| Ordering guarantee |**No** <br/><br>For more information, see the first note in the [Additional Information](#additional-information) section.</br> | **Yes - First-In-First-Out (FIFO)**<br/><br>(by using [message sessions](message-sessions.md)) |
-| Delivery guarantee |**At-Least-Once** |**At-Least-Once** (using PeekLock receive mode. It's the default) <br/><br/>**At-Most-Once** (using ReceiveAndDelete receive mode) <br/> <br/> Learn more about various [Receive modes](service-bus-queues-topics-subscriptions.md#receive-modes) |
-| Atomic operation support |**No** |**Yes**<br/><br/> |
-| Receive behavior |**Non-blocking**<br/><br/>(completes immediately if no new message is found) |**Blocking with or without a timeout**<br/><br/>(offers long polling, or the ["Comet technique"](https://go.microsoft.com/fwlink/?LinkId=613759))<br/><br/>**Non-blocking**<br/><br/>(using .NET managed API only) |
-| Push-style API |**No** |**Yes**<br/><br/>Our .NET, Java, JavaScript, and Go SDKs provide push-style API. |
-| Receive mode |**Peek & Lease** |**Peek & Lock**<br/><br/>**Receive & Delete** |
-| Exclusive access mode |**Lease-based** |**Lock-based** |
-| Lease/Lock duration |**30 seconds (default)**<br/><br/>**7 days (maximum)** (You can renew or release a message lease using the [UpdateMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.updatemessage) API.) |**30 seconds (default)**<br/><br/>You can renew the message lock for the same lock duration each time manually or use the automatic lock renewal feature where the client manages lock renewal for you. |
-| Lease/Lock precision |**Message level**<br/><br/>Each message can have a different timeout value, which you can then update as needed while processing the message, by using the [UpdateMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.updatemessage) API. |**Queue level**<br/><br/>(each queue has a lock precision applied to all of its messages, but the lock can be renewed as described in the previous row) |
-| Batched receive |**Yes**<br/><br/>(explicitly specifying message count when retrieving messages, up to a maximum of 32 messages) |**Yes**<br/><br/>(implicitly enabling a pre-fetch property or explicitly by using transactions) |
-| Batched send |**No** |**Yes**<br/><br/>(by using transactions or client-side batching) |
+| Ordering guarantee | No<br/><br/>For more information, see the first note in the [Additional Information](#additional-information) section. | Yes - First-In-First-Out (FIFO)<br/><br/>(by using [message sessions](message-sessions.md)) |
+| Delivery guarantee |At-Least-Once |At-Least-Once (using PeekLock receive mode, which is the default) <br/><br/>At-Most-Once (using ReceiveAndDelete receive mode) <br/><br/>Learn more about the various [receive modes](service-bus-queues-topics-subscriptions.md#receive-modes) |
+| Atomic operation support |No |Yes<br/><br/> |
+| Receive behavior |Non-blocking<br/><br/>(completes immediately if no new message is found) |Blocking with or without a timeout<br/><br/>(offers long polling, or the ["Comet technique"](https://go.microsoft.com/fwlink/?LinkId=613759))<br/><br/>Non-blocking<br/><br/>(using .NET managed API only) |
+| Push-style API |No |Yes<br/><br/>Our .NET, Java, JavaScript, and Go SDKs provide push-style API. |
+| Receive mode |Peek & Lease |Peek & Lock<br/><br/>Receive & Delete |
+| Exclusive access mode |Lease-based |Lock-based |
+| Lease/Lock duration |30 seconds (default)<br/><br/>7 days (maximum) (You can renew or release a message lease using the [UpdateMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.updatemessage) API.) |30 seconds (default)<br/><br/>You can renew the message lock for the same lock duration each time manually or use the automatic lock renewal feature where the client manages lock renewal for you. |
+| Lease/Lock precision |Message level<br/><br/>Each message can have a different timeout value, which you can then update as needed while processing the message, by using the [UpdateMessage](/dotnet/api/microsoft.azure.storage.queue.cloudqueue.updatemessage) API. |Queue level<br/><br/>(each queue has a lock precision applied to all of its messages, but the lock can be renewed as described in the previous row) |
+| Batched receive |Yes<br/><br/>(explicitly specifying message count when retrieving messages, up to a maximum of 32 messages) |Yes<br/><br/>(implicitly enabling a pre-fetch property or explicitly by using transactions) |
+| Batched send |No |Yes<br/><br/>(by using transactions or client-side batching) |
### Additional information
* Messages in Storage queues are typically first-in-first-out, but sometimes they can be out of order. For example, when the visibility-timeout duration of a message expires because a client application crashed while processing a message. When the visibility timeout expires, the message becomes visible again on the queue for another worker to dequeue it. At that point, the newly visible message might be placed in the queue to be dequeued again.
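The delivery-guarantee distinction in the table above (PeekLock vs. ReceiveAndDelete) can be illustrated with a toy in-memory queue. This is plain Python for illustration only, not the Azure Service Bus SDK; the class and method names are made up.

```python
class ToyQueue:
    """Minimal in-memory queue illustrating the two receive modes."""

    def __init__(self, messages):
        self.messages = list(messages)

    def receive_and_delete(self):
        # At-Most-Once: the message is removed before processing,
        # so a crash during processing loses it.
        return self.messages.pop(0) if self.messages else None

    def peek_lock(self):
        # At-Least-Once: the message stays in the queue (locked)
        # until the receiver explicitly completes it.
        return self.messages[0] if self.messages else None

    def complete(self, msg):
        # Called only after successful processing.
        self.messages.remove(msg)

# PeekLock: a crash before complete() leaves the message for redelivery.
q = ToyQueue(["m1"])
q.peek_lock()                          # worker crashes before complete()
assert q.peek_lock() == "m1"           # still there: at-least-once

# ReceiveAndDelete: a crash right after receiving loses the message.
q = ToyQueue(["m1"])
q.receive_and_delete()                 # worker crashes during processing
assert q.receive_and_delete() is None  # gone: at-most-once
```

The same trade-off applies to the real SDK: PeekLock may redeliver a message that was already processed, so handlers should be idempotent.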
This section compares advanced capabilities provided by Storage queues and Servi
| Comparison Criteria | Storage queues | Service Bus queues |
| --- | --- | --- |
-| Scheduled delivery |**Yes** |**Yes** |
-| Automatic dead lettering |**No** |**Yes** |
-| Increasing queue time-to-live value |**Yes**<br/><br/>(via in-place update of visibility timeout) |**Yes**<br/><br/>(provided via a dedicated API function) |
-| Poison message support |**Yes** |**Yes** |
-| In-place update |**Yes** |**Yes** |
-| Server-side transaction log |**Yes** |**No** |
-| Storage metrics |**Yes**<br/><br/>**Minute Metrics** provides real-time metrics for availability, TPS, API call counts, error counts, and more. They're all in real time, aggregated per minute and reported within a few minutes from what just happened in production. For more information, see [About Storage Analytics Metrics](/rest/api/storageservices/fileservices/About-Storage-Analytics-Metrics). |**Yes**<br/><br/>For information about metrics supported by Azure Service Bus, see [Message metrics](monitor-service-bus-reference.md#message-metrics). |
-| State management |**No** |**Yes** (Active, Disabled, SendDisabled, ReceiveDisabled. For details on these states, see [Queue status](entity-suspend.md#queue-status)) |
-| Message autoforwarding |**No** |**Yes** |
-| Purge queue function |**Yes** |**No** |
-| Message groups |**No** |**Yes**<br/><br/>(by using messaging sessions) |
-| Application state per message group |**No** |**Yes** |
-| Duplicate detection |**No** |**Yes**<br/><br/>(configurable on the sender side) |
-| Browsing message groups |**No** |**Yes** |
-| Fetching message sessions by ID |**No** |**Yes** |
+| Scheduled delivery |Yes |Yes |
+| Automatic dead lettering |No |Yes |
+| Increasing queue time-to-live value |Yes<br/><br/>(via in-place update of visibility timeout) |Yes<br/><br/>(provided via a dedicated API function) |
+| Poison message support |Yes |Yes |
+| In-place update |Yes |Yes |
+| Server-side transaction log |Yes |No |
+| Storage metrics |Yes<br/><br/>Minute Metrics provides real-time metrics for availability, TPS, API call counts, error counts, and more. They're all in real time, aggregated per minute and reported within a few minutes from what just happened in production. For more information, see [About Storage Analytics Metrics](/rest/api/storageservices/fileservices/About-Storage-Analytics-Metrics). |Yes<br/><br/>For information about metrics supported by Azure Service Bus, see [Message metrics](monitor-service-bus-reference.md#message-metrics). |
+| State management |No |Yes (Active, Disabled, SendDisabled, ReceiveDisabled. For details on these states, see [Queue status](entity-suspend.md#queue-status)) |
+| Message autoforwarding |No |Yes |
+| Purge queue function |Yes |No |
+| Message groups |No |Yes<br/><br/>(by using messaging sessions) |
+| Application state per message group |No |Yes |
+| Duplicate detection |No |Yes<br/><br/>(configurable on the sender side) |
+| Browsing message groups |No |Yes |
+| Fetching message sessions by ID |No |Yes |
### Additional information
* Both queuing technologies enable a message to be scheduled for delivery at a later time.
This section compares Storage queues and Service Bus queues from the perspective
| Comparison Criteria | Storage queues | Service Bus queues |
| --- | --- | --- |
-| Maximum queue size |**500 TB**<br/><br/>(limited to a [single storage account capacity](../storage/common/storage-introduction.md#queue-storage)) |**1 GB to 80 GB**<br/><br/>(defined upon creation of a queue and [enabling partitioning](service-bus-partitioning.md) ΓÇô see the ΓÇ£Additional InformationΓÇ¥ section) |
-| Maximum message size |**64 KB**<br/><br/>(48 KB when using **Base64** encoding)<br/><br/>Azure supports large messages by combining queues and blobs ΓÇô at which point you can enqueue up to 200 GB for a single item. |**256 KB** or **100 MB**<br/><br/>(including both header and body, maximum header size: 64 KB).<br/><br/>Depends on the [service tier](service-bus-premium-messaging.md). |
-| Maximum message TTL |**Infinite** (api-version 2017-07-27 or later) |**TimeSpan.Max** |
-| Maximum number of queues |**Unlimited** |**10,000**<br/><br/>(per service namespace) |
-| Maximum number of concurrent clients |**Unlimited** |**5,000** |
+| Maximum queue size |500 TB<br/><br/>(limited to a [single storage account capacity](../storage/common/storage-introduction.md#queue-storage)) |1 GB to 80 GB<br/><br/>(defined upon creation of a queue and [enabling partitioning](service-bus-partitioning.md); see the "Additional Information" section) |
+| Maximum message size |64 KB<br/><br/>(48 KB when using Base64 encoding)<br/><br/>Azure supports large messages by combining queues and blobs, at which point you can enqueue up to 200 GB for a single item. |256 KB or 100 MB<br/><br/>(including both header and body; maximum header size: 64 KB).<br/><br/>Depends on the [service tier](service-bus-premium-messaging.md). |
+| Maximum message TTL |Infinite (api-version 2017-07-27 or later) |TimeSpan.MaxValue |
+| Maximum number of queues |Unlimited |10,000<br/><br/>(per service namespace) |
+| Maximum number of concurrent clients |Unlimited |5,000 |
### Additional information
* Service Bus enforces queue size limits. The maximum queue size is specified when creating a queue. It can be between 1 GB and 80 GB. If the queue's size reaches this limit, additional incoming messages will be rejected and the caller receives an exception. For more information about quotas in Service Bus, see [Service Bus Quotas](service-bus-quotas.md).
This section compares the management features provided by Storage queues and Ser
| Comparison Criteria | Storage queues | Service Bus queues |
| --- | --- | --- |
-| Management protocol |**REST over HTTP/HTTPS** |**REST over HTTPS** |
-| Runtime protocol |**REST over HTTP/HTTPS** |**REST over HTTPS**<br/><br/>**AMQP 1.0 Standard (TCP with TLS)** |
-| .NET API |**Yes**<br/><br/>(.NET Storage Client API) |**Yes**<br/><br/>(.NET Service Bus API) |
-| Native C++ |**Yes** |**Yes** |
-| Java API |**Yes** |**Yes** |
-| PHP API |**Yes** |**Yes** |
-| Node.js API |**Yes** |**Yes** |
-| Arbitrary metadata support |**Yes** |**No** |
-| Queue naming rules |**Up to 63 characters long**<br/><br/>(Letters in a queue name must be lowercase.) |**Up to 260 characters long**<br/><br/>(Queue paths and names are case-insensitive.) |
-| Get queue length function |**Yes**<br/><br/>(Approximate value if messages expire beyond the TTL without being deleted.) |**Yes**<br/><br/>(Exact, point-in-time value.) |
-| Peek function |**Yes** |**Yes** |
+| Management protocol | REST over HTTP/HTTPS | REST over HTTPS |
+| Runtime protocol | REST over HTTP/HTTPS | REST over HTTPS<br/><br/>AMQP 1.0 Standard (TCP with TLS) |
+| .NET API | Yes<br/><br/>(.NET Storage Client API) |Yes<br/><br/>(.NET Service Bus API) |
+| Native C++ | Yes | Yes |
+| Java API | Yes | Yes |
+| PHP API | Yes | Yes |
+| Node.js API | Yes | Yes |
+| Arbitrary metadata support | Yes | No |
+| Queue naming rules | Up to 63 characters long<br/><br/>(Letters in a queue name must be lowercase.) | Up to 260 characters long <br/><br/>(Queue paths and names are case-insensitive.) |
+| Get queue length function | Yes<br/><br/>(Approximate value if messages expire beyond the TTL without being deleted.) |Yes<br/><br/>(Exact, point-in-time value.) |
+| Peek function | Yes | Yes |
### Additional information
* Storage queues provide support for arbitrary attributes that can be applied to the queue description, in the form of name/value pairs.
This section discusses the authentication and authorization features supported b
| Comparison Criteria | Storage queues | Service Bus queues |
| --- | --- | --- |
-| Authentication |**Symmetric key** |**Symmetric key** |
-| Security model |Delegated access via SAS tokens. |SAS |
-| Identity provider federation |**Yes** |**Yes** |
+| Authentication | [Symmetric key](../storage/common/storage-account-keys-manage.md) and [Role-based access control (RBAC)](../storage/queues/assign-azure-role-data-access.md) |[Symmetric key](service-bus-authentication-and-authorization.md#shared-access-signature) and [Role-based access control (RBAC)](service-bus-authentication-and-authorization.md#azure-active-directory) |
+| Identity provider federation | Yes | Yes |
### Additional information
-* Every request to either of the queuing technologies must be authenticated. Public queues with anonymous access aren't supported. Using [SAS](service-bus-sas.md), you can address this scenario by publishing a write-only SAS, read-only SAS, or even a full-access SAS.
-* The authentication scheme provided by Storage queues involves the use of a symmetric key. This key is a hash-based Message Authentication Code (HMAC), computed with the SHA-256 algorithm and encoded as a **Base64** string. For more information about the respective protocol, see [Authentication for the Azure Storage Services](/rest/api/storageservices/fileservices/Authentication-for-the-Azure-Storage-Services). Service Bus queues support a similar model using symmetric keys. For more information, see [Shared Access Signature Authentication with Service Bus](service-bus-sas.md).
+* Every request to either of the queuing technologies must be authenticated. Public queues with anonymous access aren't supported.
+* Using shared access signature (SAS) authentication, you can create a shared access authorization rule on a queue that can give users a write-only, read-only, or full access. For more information, see [Azure Storage - SAS authentication](../storage/common/storage-sas-overview.md) and [Azure Service Bus - SAS authentication](service-bus-sas.md).
+* Both queues support authorizing access using Azure Active Directory (Azure AD). Authorizing users or applications with an OAuth 2.0 token returned by Azure AD provides superior security and ease of use over shared access signatures (SAS). With Azure AD, there's no need to store tokens in your code and risk potential security vulnerabilities. For more information, see [Azure Storage - Azure AD authentication](../storage/queues/assign-azure-role-data-access.md) and [Azure Service Bus - Azure AD authentication](service-bus-authentication-and-authorization.md#azure-active-directory).
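As a concrete illustration of the symmetric-key schemes mentioned above, SAS-style signing computes an HMAC-SHA256 over a string-to-sign using the Base64-decoded key, then Base64-encodes the digest. The sketch below shows only that signing step; the key and string-to-sign are made-up placeholders, not the real Storage or Service Bus string-to-sign format.

```python
import base64
import hashlib
import hmac

def sas_style_signature(string_to_sign: str, base64_key: str) -> str:
    """HMAC-SHA256 over the string-to-sign, Base64-encoded."""
    key = base64.b64decode(base64_key)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Placeholder key and string-to-sign, for illustration only.
key = base64.b64encode(b"not-a-real-account-key").decode("utf-8")
signature = sas_style_signature("GET\n2017-07-27\n/myaccount/myqueue", key)
assert len(base64.b64decode(signature)) == 32  # SHA-256 digests are 32 bytes
```

In both services the signature is deterministic for a given key and string-to-sign, which is what lets the service recompute and verify it without a round trip.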
## Conclusion
By gaining a deeper understanding of the two technologies, you can make a more informed decision on which queue technology to use, and when. The decision on when to use Storage queues or Service Bus queues clearly depends on many factors. These factors may depend heavily on the individual needs of your application and its architecture.
spatial-anchors Setup Unity Project https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/how-tos/setup-unity-project.md
Before including the Azure Spatial Anchors SDK in your Unity project, be sure to
### Import ASA packages [!INCLUDE [Import Unity Packages](../../../includes/spatial-anchors-unity-import-packages.md)]
-### HoloLens only
+### Extra configurations
+If you're developing for HoloLens or Android, follow the additional setup steps below.
+
+# [HoloLens](#tab/ExtraConfigurationsHoloLens)
#### Configure your Unity project XR settings
When developing Mixed Reality apps on HoloLens, you need to set the XR configuration in Unity. For more information, see [Setting up your XR configuration - Mixed Reality | Microsoft Docs](/windows/mixed-reality/develop/unity/xr-project-setup?tabs=openxr) and [Choosing a Unity version and XR plugin - Mixed Reality | Microsoft Docs](/windows/mixed-reality/develop/unity/choosing-unity-version).
Be sure to enable the following capabilities in your Unity project:
> [!WARNING] > Failure to enable the PrivateNetworkClientServer capability may lead to a failure to query anchors when the device is using a network that is configured to be private.
-### Android only: Configure the mainTemplate.gradle file
+# [Android](#tab/ExtraConfigurationsAndroid)
+#### Configure the mainTemplate.gradle file
1. Go to **Edit** > **Project Settings** > **Player**.
2. In the **Inspector Panel** for **Player Settings**, select the **Android** icon.
spatial-anchors Get Started Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-android.md
ms.devlang: azurecli
-# Quickstart: Create an Android app with Azure Spatial Anchors
+# Run the sample app: Android - Android Studio (Java or C++/NDK)
-This quickstart covers how to create an Android app using [Azure Spatial Anchors](../overview.md) in either Java or C++/NDK. Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an ARCore Android app that can save and recall a spatial anchor.
+This quickstart covers how to run the [Azure Spatial Anchors](../overview.md) sample app for Android devices using Android Studio (Java or C++/NDK). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an ARCore Android app that can save and recall a spatial anchor.
You'll learn how to:
spatial-anchors Get Started Hololens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-hololens.md
ms.devlang: azurecli
-# Quickstart: Create a HoloLens app with Azure Spatial Anchors, in C++/WinRT and DirectX
-This quickstart covers how to create a HoloLens app using [Azure Spatial Anchors](../overview.md) in C++/WinRT and DirectX. Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have a HoloLens app that can save and recall a spatial anchor.
+# Run the sample app: HoloLens - Visual Studio (C++/WinRT)
+
+This quickstart covers how to run the [Azure Spatial Anchors](../overview.md) sample app for HoloLens using Visual Studio (C++/WinRT and DirectX). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have a HoloLens app that can save and recall a spatial anchor.
You'll learn how to:
spatial-anchors Get Started Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-ios.md
-# Quickstart: Create an iOS app with Azure Spatial Anchors, in either Swift or Objective-C
-This quickstart covers how to create an iOS app using [Azure Spatial Anchors](../overview.md) in either Swift or Objective-C. Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an ARKit iOS app that can save and recall a spatial anchor.
+# Run the sample app: iOS - Xcode (Swift or Objective-C)
+
+This quickstart covers how to run the [Azure Spatial Anchors](../overview.md) sample app for iOS devices using Xcode (Swift or Objective-C). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an ARKit iOS app that can save and recall a spatial anchor.
You'll learn how to:
spatial-anchors Get Started Unity Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-android.md
ms.devlang: azurecli
-# Quickstart: Create a Unity Android app with Azure Spatial Anchors
-This quickstart covers how to create a Unity Android app using [Azure Spatial Anchors](../overview.md). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an ARCore Android app built with Unity that can save and recall a spatial anchor.
+# Run the sample app: Android - Unity (C#)
+
+This quickstart covers how to run the [Azure Spatial Anchors](../overview.md) sample app for Android devices using Unity (C#). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an ARCore Android app built with Unity that can save and recall a spatial anchor.
You'll learn how to:
spatial-anchors Get Started Unity Hololens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-hololens.md
ms.devlang: azurecli
-# Quickstart: Create a Unity HoloLens app that uses Azure Spatial Anchors
+# Run the sample app: HoloLens - Unity (C#)
-In this quickstart, you'll create a Unity HoloLens app that uses [Azure Spatial Anchors](../overview.md). Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences with objects that persist their location across devices over time. When you're finished, you'll have a HoloLens app built with Unity that can save and recall a spatial anchor.
+In this quickstart, you'll run the [Azure Spatial Anchors](../overview.md) sample app for HoloLens using Unity (C#). Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences with objects that persist their location across devices over time. When you're finished, you'll have a HoloLens app built with Unity that can save and recall a spatial anchor.
You'll learn how to:
spatial-anchors Get Started Unity Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-unity-ios.md
ms.devlang: azurecli
-# Quickstart: Create a Unity iOS app with Azure Spatial Anchors
+# Run the sample app: iOS - Unity (C#)
-This quickstart covers how to create a Unity iOS app using [Azure Spatial Anchors](../overview.md). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an ARKit iOS app built with Unity that can save and recall a spatial anchor.
+This quickstart covers how to run the [Azure Spatial Anchors](../overview.md) sample app for iOS devices using Unity (C#). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an ARKit iOS app built with Unity that can save and recall a spatial anchor.
You'll learn how to:
spatial-anchors Get Started Xamarin Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-xamarin-android.md
ms.devlang: azurecli
-# Quickstart: Create a Xamarin Android app with Azure Spatial Anchors
+# Run the sample app: Android - Xamarin (C#)
-This quickstart covers how to create an Android app with Xamarin using [Azure Spatial Anchors](../overview.md). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an Android app that can save and recall a spatial anchor.
+This quickstart covers how to run the [Azure Spatial Anchors](../overview.md) sample app for Android devices using Xamarin (C#). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an Android app that can save and recall a spatial anchor.
You'll learn how to:
spatial-anchors Get Started Xamarin Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/quickstarts/get-started-xamarin-ios.md
ms.devlang: azurecli
-# Quickstart: Create a Xamarin iOS app with Azure Spatial Anchors
+# Run the sample app: iOS - Xamarin (C#)
-This quickstart covers how to create an iOS app with Xamarin using [Azure Spatial Anchors](../overview.md). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an iOS app that can save and recall a spatial anchor.
+This quickstart covers how to run the [Azure Spatial Anchors](../overview.md) sample app for iOS devices using Xamarin (C#). Azure Spatial Anchors is a cross-platform developer service that allows you to create mixed reality experiences using objects that persist their location across devices over time. When you're finished, you'll have an iOS app that can save and recall a spatial anchor.
You'll learn how to:
spatial-anchors Spatial Anchor Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/spatial-anchor-support.md
# Azure Spatial Anchors support options ## Open a tech support ticket-
+If you believe there is something wrong with the service, please open a technical support ticket.
To open a technical support ticket within the Azure portal for Azure Spatial Anchors: 1. With the [Azure portal](https://azure.microsoft.com/account/) open, select the help icon from the top menu bar, then select the **Help + support** button.
To open a technical support ticket within the Azure portal for Azure Spatial Anc
![Azure portal support ticket fields](./media/spatial-anchor-support3.png) ## Team & community support
-### Azure Spatial Anchors general
-For support from the Spatial Anchors team and the user community, see [Azure Spatial Anchors Q&A](/answers/topics/azure-spatial-anchors.html).
- ### Azure Spatial Anchors samples If you are unable to run the samples, please file an issue in the [ASA samples repository](https://github.com/Azure/azure-spatial-anchors-samples/issues) by clicking _New issue_ then _Get started_
+### Azure Spatial Anchors general
+For support from the Spatial Anchors team and the user community, visit [Azure Spatial Anchors Q&A](/answers/topics/azure-spatial-anchors.html).
+
+### External Communities
+Additional community-driven support platforms for Azure Spatial Anchors can be found on [Slack](https://aka.ms/holodevelopers) and [Stack Overflow](https://stackoverflow.com/questions/tagged/azure-spatial-anchors).
+ ## Provide content article feedback At the bottom of each content article, there is an opportunity to open a GitHub issue and provide feedback on the Azure Spatial Anchor documentation content.
-## Provide product feedback
+## Provide product feedback & suggestions
To provide feedback, share an idea or suggestion for the Azure Spatial Anchors service, or vote on the ideas that others have submitted, visit the [Azure Spatial Anchors Feedback Forum](https://feedback.azure.com/d365community/forum/f47d9b25-0725-ec11-b6e6-000d3a4f07b8).
spatial-anchors Tutorial New Unity Hololens App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/tutorials/tutorial-new-unity-hololens-app.md
To complete this tutorial, make sure you have:
5. Press **Get Features** --> **Import** --> **Approve** --> **Exit** 6. When refocusing your Unity window, Unity will start importing the modules
-7. If you get a message about using the new input system, click **Yes** to restart Unity and enable the backends.
+7. If you get a message about using the new input system, select **Yes** to restart Unity and enable the backends.
### Set up the project settings
We'll now set some Unity project settings that help us target the Windows Hologr
1. Select **Player Settings...** 1. Select **XR Plug-in Management** 1. Make sure the **Universal Windows Platform Settings** tab is selected and check the box next to **OpenXR** and next to **Microsoft HoloLens feature group**
-1. Click on the yellow warning sign next to **OpenXR** to display all OpenXR issues.
+1. Select the yellow warning sign next to **OpenXR** to display all OpenXR issues.
1. Select **Fix all**
-1. To fix the issue "_At least one interaction profile must be added_", click on *Edit* to open the OpenXR Project settings. Then under **Interaction Profiles** select the **+** symbol and select **Microsoft Hand Interaction Profile**
+1. To fix the issue "_At least one interaction profile must be added_", select *Edit* to open the OpenXR Project settings. Then under **Interaction Profiles** select the **+** symbol and select **Microsoft Hand Interaction Profile**
![Unity - OpenXR Setup](../../../includes/media/spatial-anchors-unity/unity-hl2-openxr-setup.png) #### Change Quality Settings 1. Select **Edit** > **Project Settings** > **Quality**
-2. In the column under the **Universal Windows Platform** logo, click on the arrow at the **Default** row and select **Very Low**. You'll know the setting is applied correctly when the box in the **Universal Windows Platform** column and **Very Low** row is green.
+2. In the column under the **Universal Windows Platform** logo, select the arrow in the **Default** row and select **Very Low**. You'll know the setting is applied correctly when the box in the **Universal Windows Platform** column and **Very Low** row is green.
#### Set capabilities 1. Go to **Edit** > **Project Settings** > **Player** (you may still have it open from the previous step).
We'll now set some Unity project settings that help us target the Windows Hologr
1. In the **Hierarchy Panel**, select **Main Camera**. 2. In the **Inspector**, set its transform position to **0,0,0**. 3. Find the **Clear Flags** property, and change the dropdown from **Skybox** to **Solid Color**.
-4. Click on the **Background** field to open a color picker.
+4. Select the **Background** field to open a color picker.
5. Set **R, G, B, and A** to **0**.
-6. Click **Add Component** and add the **Tracked Pose Driver** Component to the camera
+6. Select **Add Component** at the bottom and add the **Tracked Pose Driver** component to the camera.
![Unity - Camera Setup](../../../includes/media/spatial-anchors-unity/unity-camera-setup.png) ## Try it out #1
spatial-anchors Unity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spatial-anchors/unity-overview.md
- Title: Azure Spatial Anchors Unity overview
-description: Learn how Azure Spatial Anchors can be used within Unity Apps. Review quickstarts for Unity for HoloLens, Unity for Android, and Unity for iOS.
---- Previously updated : 11/12/2021---
-# Building in Unity with Azure Spatial Anchors
-
-Developers can choose Unity for creating and deploying mixed reality applications that use Azure Spatial Anchors. If creating your own project, follow the [Unity project setup guide](./how-tos/setup-unity-project.md). Otherwise, you can get started quickly with one of the following Quickstarts:
-
-**Unity for HoloLens**
-
-[Quickstart: Create a Unity HoloLens app that uses Azure Spatial Anchors](./quickstarts/get-started-unity-hololens.md)
-
-**Unity for Android**
-
-[Quickstart: Create a Unity Android app that uses Azure Spatial Anchors](./quickstarts/get-started-unity-android.md)
-
-**Unity for iOS**
-
-[Quickstart: Create a Unity iOS app that uses Azure Spatial Anchors](./quickstarts/get-started-unity-ios.md)
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
az ad sp show --id 18218b12-1895-43e9-ad80-6e8fc1ea88ce --query objectId
The OID will be displayed.
-When you have the correct OID for the service principal, go to the Storage Explorer **Manage Access** page to add the OID and assign appropriate permissions for the OID. Make sure you select **Save**.
+When you have the correct OID for the service principal, go to the Storage Explorer **Manage Access** page to add the OID and assign appropriate permissions for the OID. Make sure you select **Save**.
### Can I set the ACL of a container?
The Azure Storage REST API does contain an operation named [Set Container ACL](/
- [POSIX Access Control Lists on Linux](https://www.linux.com/news/posix-acls-linux) - [HDFS permission guide](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html) - [POSIX FAQ](https://www.opengroup.org/austin/papers/posix_faq.html)-- [POSIX 1003.1 2008](https://standards.ieee.org/findstds/standard/1003.1-2008.html)
+- [POSIX 1003.1 2008](https://standards.ieee.org/wp-content/uploads/import/documents/interpretations/1003.1-2008_interp.pdf)
- [POSIX 1003.1 2013](https://pubs.opengroup.org/onlinepubs/9699919799.2013edition/) - [POSIX 1003.1 2016](https://pubs.opengroup.org/onlinepubs/9699919799.2016edition/) - [POSIX ACL on Ubuntu](https://help.ubuntu.com/community/FilePermissionsACLs)
storage Secure File Transfer Protocol Support How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support-how-to.md
To learn more about SFTP support for Azure Blob Storage, see [SSH File Transfer
- A standard general-purpose v2 or premium block blob storage account. You can also enable SFTP as create the account. For more information on these types of storage accounts, see [Storage account overview](../common/storage-account-overview.md). -- The account redundancy option of the storage account is set to either locally-redundant storage (LRS) or zone-redundant storage (ZRS).- - The hierarchical namespace feature of the account must be enabled. To enable the hierarchical namespace feature, see [Upgrade Azure Blob Storage with Azure Data Lake Storage Gen2 capabilities](upgrade-to-data-lake-storage-gen2-how-to.md). -- If you're connecting from an on-premises network, make sure that your client allows outgoing communication through port 22. The SFTP uses that port.
+- If you're connecting from an on-premises network, make sure that your client allows outgoing communication through port 22, which SFTP uses.
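Before enabling SFTP, you can verify that outbound port 22 is actually reachable from your network. A minimal sketch using Python's standard `socket` module; `port_open` is a hypothetical helper name, and the endpoint hostname in the comment is a placeholder for your storage account's SFTP endpoint:

```python
import socket

def port_open(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder hostname - substitute your own account's endpoint):
# port_open("mystorageaccount.blob.core.windows.net", 22)
```

If this returns `False` from your on-premises network, check your firewall's outbound rules before troubleshooting the storage account itself.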
## Register the feature
storage Customer Managed Keys Configure Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/customer-managed-keys-configure-key-vault.md
Previously updated : 01/28/2022 Last updated : 03/03/2022
This article shows how to configure encryption with customer-managed keys stored
## Configure a key vault
-You can use a new or existing key vault to store customer-managed keys. The Storage Account and Key Vault can be in different regions or subscriptions in the same tenant. To learn more about Azure Key Vault, see [Azure Key Vault Overview](../../key-vault/general/overview.md) and [What is Azure Key Vault?](../../key-vault/general/basic-concepts.md).
+You can use a new or existing key vault to store customer-managed keys. The storage account and key vault may be in different regions or subscriptions in the same tenant. To learn more about Azure Key Vault, see [Azure Key Vault Overview](../../key-vault/general/overview.md) and [What is Azure Key Vault?](../../key-vault/general/basic-concepts.md).
Using customer-managed keys with Azure Storage encryption requires that both soft delete and purge protection be enabled for the key vault. Soft delete is enabled by default when you create a new key vault and cannot be disabled. You can enable purge protection either when you create the key vault or after it is created.
storage Partner Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/primary-secondary-storage/partner-overview.md
Title: Storage partners for primary and secondary storage description: Microsoft partners who build customer solutions for primary and secondary storage solutions with Azure Storage- Last updated 05/12/2021
This article highlights Microsoft partner companies that deliver a network attac
| ![Nasuni](./media/nasuni-logo.png) |**Nasuni**<br>Nasuni is a file storage platform that replaces enterprise NAS and file servers including the associated infrastructure for BCDR and disk tiering. Virtual edge appliances keep files quickly accessible and synchronized with the cloud. The management console lets you manage multiple storage sites from one location including the ability to provision, monitor, control, and report on your file infrastructure. Continuous versioning to the cloud brings file restore times down to minutes.<br><br>Nasuni cloud file storage built on Azure eliminates traditional NAS and file servers across any number of locations and replaces it with a cloud solution. Nasuni cloud file storage provides infinite file storage, backups, disaster recovery, and multi-site file sharing. Nasuni is a software-as-a-service used for data-center-to-the-cloud initiatives, multi-location file synching, sharing and collaboration, and as a cloud storage companion for VDI environments.|[Partner page](https://www.nasuni.com/partner/microsoft/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/nasunicorporation.nasuni)| | ![Panzura](./media/panzura-logo.png) |**Panzura**<br>Panzura is the fabric that transforms Azure cloud storage into a high-performance global file system. By delivering one authoritative data source for all users, Panzura allows enterprises to use Azure as a globally available data center, with all the functionality and speed of a single-site NAS, including automatic file locking, immediate global data consistency, and local file operation performance. 
|[Partner page](https://panzura.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/panzura-file-system.panzura-freedom-filer)| | ![Pure Storage](./media/pure-logo.png) |**Pure Storage**<br>Pure delivers a modern data experience that empowers organizations to run their operations as a true, automated, storage as-a-service model seamlessly across multiple clouds.|[Partner page](https://www.purestorage.com/company/technology-partners/microsoft.html)<br>[Solution Video](https://azure.microsoft.com/resources/videos/pure-storage-overview)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/purestoragemarketplaceadmin.pure_storage_cloud_block_store_deployment?tab=Overview)|
+| ![Qumulo](./media/qumulo-logo.png)|**Qumulo**<br>Qumulo is a fast, scalable, and simple-to-use file system that makes it easy to store, manage, and run applications that use file data at scale on Microsoft Azure. Qumulo on Azure offers multiple petabytes (PB) of storage capacity and up to 20 GB/s of performance per file system. Windows (SMB) and Linux (NFS) are both natively supported. A patented software architecture delivers a low per-terabyte (TB) cost. Media & Entertainment, Genomics, Technology, Natural Resources, and Finance companies all run their most demanding workloads on Qumulo in the cloud. With a Net Promoter Score of 89, customers use Qumulo for its scale, performance, and ease-of-use capabilities like real-time visual insights into how storage is used and award-winning Slack-based support. Sign up for a free POC today through [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview) or [Qumulo.com](https://qumulo.com/). | [Partner page](https://qumulo.com/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas?tab=Overview)<br>[Datasheet](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWUtF0)|
| ![Scality](./media/scality-logo.png) |**Scality**<br>Scality builds a software-defined file and object platform designed for on-premises, hybrid, and multi-cloud environments. Scality's integration with Azure Blob Storage enables enterprises to manage and secure their data between on-premises environments and Azure, and meet the demands of high-performance, cloud-based file workloads. |[Partner page](https://www.scality.com/partners/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/scality.scalityconnecthourly?tab=Overview)| | ![Tiger Technology company logo](./media/tiger-logo.png) |**Tiger Technology**<br>Tiger Technology offers high-performance, secure data management software solutions. Tiger Technology enables organizations of any size to manage their digital assets on-premises, in any public cloud, or through a hybrid model. <br><br> Tiger Bridge is a non-proprietary, software-only data and storage management system. It blends on-premises and multi-tier cloud storage into a single space, and enables hybrid workflows. This transparent file server extension lets you benefit from Azure scale and services, while preserving legacy applications and workflows. Tiger Bridge addresses several data management challenges, including: file server extension, disaster recovery, cloud migration, backup and archive, remote collaboration, and multi-site sync. It also offers continuous data protection. |[Partner page](https://www.tiger-technology.com/partners/microsoft-azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/tiger-technology.tigerbridge_vm)| | ![XenData company logo](./media/xendata-logo.png) |**XenData**<br>XenData software creates multi-tier storage systems that manage files and folders across on-premises storage and Azure Blob Storage. XenData Multi-Site Sync software creates a global file system for distributed teams, enabling them to share and synchronize files across multiple locations. 
XenData cloud solutions are optimized for video files, supporting video streaming and partial file restore. They are integrated with many complementary software products used in the Media and Entertainment industry and support a variety of workflows. Other industries and applications that use XenData solutions include Oil and Gas, Engineering and Scientific Data, Video Surveillance and Medical Imaging. |[Partner page](https://xendata.com/tech_partners_cloud/azure/)<br>[Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/xendata-inc.sol-15118-gyy?tab=Overview)|
virtual-machines Nva10v5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nva10v5-series.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-The NVadsA10v5-series virtual machines are powered by [NVIDIA A10](https://www.nvidia.com/en-us/data-center/products/a10-gpu/) GPUs and AMD EPYC 74F3V(Milan) CPUs with a base frequency of 3.4 GHz, all-cores peak frequency of 4.0 GHz. With NVadsA10v5-series Azure is introducing virtual machines with partial NVIDIA GPUs. Pick the right sized virtual machine for GPU accelerated graphics applications and virtual desktops starting at 1/6th of a GPU with 4-GiB frame buffer to a full A10 GPU with 24-GiB frame buffer.
+The NVadsA10v5-series virtual machines are powered by [NVIDIA A10](https://www.nvidia.com/en-us/data-center/products/a10-gpu/) GPUs and AMD EPYC 74F3V (Milan) CPUs with a base frequency of 3.2 GHz and an all-cores peak frequency of 4.0 GHz. With the NVadsA10v5-series, Azure is introducing virtual machines with partial NVIDIA GPUs. Pick the right-sized virtual machine for GPU-accelerated graphics applications and virtual desktops, starting at 1/6th of a GPU with a 4-GiB frame buffer up to a full A10 GPU with a 24-GiB frame buffer.
The preview is currently available in the US South Central and West Europe regions. [Sign up for the preview](https://aka.ms/AzureNVadsA10v5Preview) to get early access to the NVadsA10v5-series.
virtual-machines Oracle Database Quick Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-quick-create.md
Last updated 10/05/2020 -+ ms.devlang: azurecli
az vm disk attach --name oradata01 --new --resource-group rg-oracle --size-gb 64
``` ## Open ports for connectivity
-In this task you must configure some external endpoints for the database listener and EM Express to use by setting up the Azure Network Security Group that protects the VM.
+In this task you must configure some external endpoints for the database listener to use by setting up the Azure Network Security Group that protects the VM.
1. To open the endpoint that you use to access the Oracle database remotely, create a Network Security Group rule as follows: ```bash
In this task you must configure some external endpoints for the database listene
--priority 1001 ^ --destination-port-range 1521 ```
-2. To open the endpoint that you use to access Oracle EM Express remotely, create a Network Security Group rule with az network nsg rule create as follows:
+2. To open the endpoint that you use to access Oracle remotely, create a Network Security Group rule with `az network nsg rule create` as follows:
```bash az network nsg rule create ^ --resource-group rg-oracle ^
The Oracle software is already installed on the Marketplace image. Create a samp
echo "export ORACLE_SID=oratest1" >> ~oracle/.bashrc ```
-## Oracle EM Express connectivity
-
-For a GUI management tool that you can use to explore the database, set up Oracle EM Express. To connect to Oracle EM Express, you must first set up the port in Oracle.
-
-1. Connect to your database using sqlplus:
-
- ```bash
- sqlplus sys as sysdba
- ```
-
-2. Once connected, set the port 5502 for EM Express
-
- ```bash
- exec DBMS_XDB_CONFIG.SETHTTPSPORT(5502);
- ```
-
-3. Connect EM Express from your browser. Make sure your browser is compatible with EM Express (Flash install is required):
-
- ```https
- https://<VM ip address or hostname>:5502/em
- ```
-
- You can log in by using the **SYS** account, and check the **as sysdba** checkbox. Use the password **OraPasswd1** that you set during installation.
-
- ![Screenshot of the Oracle OEM Express login page](./media/oracle-quick-start/oracle_oem_express_login.png)
- ## Automate database startup and shutdown The Oracle database by default doesn't automatically start when you restart the VM. To set up the Oracle database to start automatically, first sign in as root. Then, create and update some system files.
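One of the "system files" involved is typically `/etc/oratab`, which the Oracle-supplied `dbstart` and `dbshut` scripts read to decide which instances to start and stop automatically. A sketch of the relevant entry, assuming the `oratest1` SID used earlier in this quickstart; the ORACLE_HOME path shown is a placeholder for your actual installation path:

```
# /etc/oratab - format: <SID>:<ORACLE_HOME>:<Y|N>
# The trailing Y tells dbstart to bring this instance up automatically.
oratest1:/u01/app/oracle/product/19.0.0/dbhome_1:Y
```

A startup script or systemd unit that calls `dbstart`/`dbshut` as the oracle user is still needed to hook this into the VM's boot sequence.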
virtual-wan About Virtual Hub Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/about-virtual-hub-routing.md
Consider the following when configuring Virtual WAN routing:
* When using Azure Firewall in multiple regions, all spoke virtual networks must be associated to the same route table. For example, having a subset of the VNets going through the Azure Firewall while other VNets bypass the Azure Firewall in the same virtual hub is not possible. * You may specify multiple next hop IP addresses on a single Virtual Network connection. However, a Virtual Network connection does not support 'multiple/unique' next hop IPs to the 'same' network virtual appliance in a spoke Virtual Network if one of the routes with a next hop IP is indicated to be a public IP address or 0.0.0.0/0 (internet). * All information pertaining to the 0.0.0.0/0 route is confined to a local hub's route table. This route does not propagate across hubs.
+* You can only use Virtual WAN to program routes in a spoke if the prefix is shorter (less specific) than the virtual network prefix. For example, in the diagram above, the spoke VNET1 has the prefix 10.1.0.0/16: in this case, Virtual WAN would not be able to inject a route that matches the virtual network prefix (10.1.0.0/16) or any of its subnets (10.1.0.0/24, 10.1.1.0/24). In other words, Virtual WAN cannot attract traffic between two subnets that are in the same virtual network.
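The prefix-length rule above can be sketched with Python's standard `ipaddress` module; the 10.1.0.0/16 VNet prefix is taken from the example, and `injectable` is a hypothetical helper name used only for illustration:

```python
import ipaddress

# Spoke VNet address space from the example above.
vnet = ipaddress.ip_network("10.1.0.0/16")

def injectable(route_prefix: str) -> bool:
    """Virtual WAN can program a route into the spoke only if its prefix
    is strictly shorter (less specific) than the VNet's own prefix."""
    return ipaddress.ip_network(route_prefix).prefixlen < vnet.prefixlen

print(injectable("10.0.0.0/8"))   # True  - less specific than the VNet prefix
print(injectable("10.1.0.0/16"))  # False - matches the VNet prefix exactly
print(injectable("10.1.1.0/24"))  # False - a more specific subnet route
```

This is why routes to individual subnets of a spoke VNet cannot be injected, and traffic between two subnets of the same VNet cannot be steered through the hub.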
+ ## Next steps * To configure routing, see [How to configure virtual hub routing](how-to-virtual-hub-routing.md).