Updates from: 03/17/2022 02:09:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/authorization-code-flow.md
Previously updated : 03/03/2022 Last updated : 03/31/2022
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
&response_type=code
&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob
&response_mode=query
-&scope=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6%20offline_access
+&scope=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6%20offline_access%20https://{tenant-name}/{app-id-uri}/{scope}
&state=arbitrary_data_you_can_receive_in_the_response
&code_challenge=YTFjNjI1OWYzMzA3MTI4ZDY2Njg5M2RkNmVjNDE5YmEyZGRhOGYyM2IzNjdmZWFhMTQ1ODg3NDcxY2Nl
&code_challenge_method=S256
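The authorization request above can be assembled programmatically. The following is a minimal Python sketch, not part of the article: the tenant, policy, state, and code-challenge values are illustrative placeholders, and `quote_via=quote` is used so spaces encode as `%20` as in the request shown.

```python
from urllib.parse import urlencode, quote

def build_authorize_url(tenant, policy, client_id, scope, redirect_uri,
                        state, code_challenge):
    # Assemble the /authorize request shown above; placeholder values only.
    base = (f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
            f"{policy}/oauth2/v2.0/authorize")
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "response_mode": "query",
        "scope": scope,
        "state": state,
        "code_challenge": code_challenge,
        "code_challenge_method": "S256",
    }
    # quote_via=quote percent-encodes spaces as %20, matching the example.
    return f"{base}?{urlencode(params, quote_via=quote)}"

url = build_authorize_url(
    "contoso", "b2c_1_sign_in", "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6",
    "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6 offline_access",
    "urn:ietf:wg:oauth:2.0:oob", "some_state", "YTFjNjI1OWYz")
```

To request an access token for a web API, the `scope` string would also carry the `https://{tenant-name}/{app-id-uri}/{scope}` value shown in the diff above.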
active-directory-b2c Configure A Sample Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md
Previously updated : 04/03/2022 Last updated : 03/31/2022
Open your web app in a code editor such as Visual Studio Code. Under the project
|||
|`APP_CLIENT_ID`|The **Application (client) ID** for the web app you registered in [step 2.1](#step-2-register-a-web-application). |
|`APP_CLIENT_SECRET`|The client secret for the web app you created in [step 2.2](#step-22-create-a-web-app-client-secret). |
-|`SIGN_UP_SIGN_IN_POLICY_AUTHORITY`|The **Sign in and sign up** user flow authority such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi_node_app`. Learn how to [Get your tenant name](tenant-management.md#get-your-tenant-name). |
+|`SIGN_UP_SIGN_IN_POLICY_AUTHORITY`|The **Sign in and sign up** user flow authority such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi`. Learn how to [Get your tenant name](tenant-management.md#get-your-tenant-name). |
|`RESET_PASSWORD_POLICY_AUTHORITY`| The **Reset password** user flow authority such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<reset-password-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<reset-password-user-flow-name>` with the name of your Reset password user flow such as `B2C_1_reset_password_node_app`.|
|`EDIT_PROFILE_POLICY_AUTHORITY`|The **Profile editing** user flow authority such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<profile-edit-user-flow-name>`. Replace `<your-tenant-name>` with the name of your tenant and `<profile-edit-user-flow-name>` with the name of your Profile editing user flow such as `B2C_1_edit_profile_node_app`. |
|`AUTHORITY_DOMAIN`| The Azure AD B2C authority domain such as `https://<your-tenant-name>.b2clogin.com`. Replace `<your-tenant-name>` with the name of your tenant.|
|`APP_REDIRECT_URI`| The application redirect URI where Azure AD B2C will return authentication responses (tokens). It matches the **Redirect URI** you set while registering your app in the Azure portal, and it must be publicly accessible. Leave the value as is.|
-|`LOGOUT_ENDPOINT`| The Azure AD B2C sign out endpoint such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>/oauth2/v2.0/logout?post_logout_redirect_uri=http://localhost:3000`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi_node_app`.|
+|`LOGOUT_ENDPOINT`| The Azure AD B2C sign out endpoint such as `https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/<sign-in-sign-up-user-flow-name>/oauth2/v2.0/logout?post_logout_redirect_uri=http://localhost:3000`. Replace `<your-tenant-name>` with the name of your tenant and `<sign-in-sign-up-user-flow-name>` with the name of your Sign in and Sign up user flow such as `B2C_1_susi`.|
Your final configuration file should look like the following sample:
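The article's actual sample file isn't reproduced here. As a hedged illustration assembled from the settings table above, a configuration might look like the following, where every value is a placeholder and the redirect URI value is an assumption:

```text
APP_CLIENT_ID=<your-app-client-id>
APP_CLIENT_SECRET=<your-app-client-secret>
SIGN_UP_SIGN_IN_POLICY_AUTHORITY=https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/B2C_1_susi
RESET_PASSWORD_POLICY_AUTHORITY=https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/B2C_1_reset_password_node_app
EDIT_PROFILE_POLICY_AUTHORITY=https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/B2C_1_edit_profile_node_app
AUTHORITY_DOMAIN=https://<your-tenant-name>.b2clogin.com
APP_REDIRECT_URI=http://localhost:3000/redirect
LOGOUT_ENDPOINT=https://<your-tenant-name>.b2clogin.com/<your-tenant-name>.onmicrosoft.com/B2C_1_susi/oauth2/v2.0/logout?post_logout_redirect_uri=http://localhost:3000
```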
You can now test the sample app. You need to start the Node server and access it
### Test sign in
-1. After the page with the **Sign in** button finishes loading, select **Sign in**. You're prompted to sign in.
+1. After the page with the **Sign in** button completes loading, select **Sign in**. You're prompted to sign in.
1. Enter your sign-in credentials, such as email address and password. If you don't have an account, select **Sign up now** to create an account. After you successfully sign in or sign up, you should see the following page that shows sign-in status.

   :::image type="content" source="./media/configure-a-sample-node-web-app/tutorial-dashboard-page.png" alt-text="Screenshot shows web app sign-in status.":::
active-directory-b2c Direct Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/direct-signin.md
Title: Set up direct sign-in using Azure Active Directory B2C
-description: Learn how to prepopulate the sign-in name or redirect straight to a social identity provider.
+ Title: Set up direct sign in using Azure Active Directory B2C
+description: Learn how to prepopulate the sign in name or redirect straight to a social identity provider.
Previously updated : 12/14/2020 Last updated : 03/31/2022
zone_pivot_groups: b2c-policy-type
-# Set up direct sign-in using Azure Active Directory B2C
+# Set up direct sign in using Azure Active Directory B2C
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-When setting up sign-in for your application using Azure Active Directory (AD) B2C, you can prepopulate the sign-in name or direct sign-in to a specific social identity provider, such as Facebook, LinkedIn, or a Microsoft account.
+When you set up sign-in for your application using Azure Active Directory B2C (Azure AD B2C), you can prepopulate the sign-in name or directly sign in to a specific social identity provider, such as Facebook, LinkedIn, or a Microsoft account.
-## Prepopulate the sign-in name
+## Prepopulate the sign in name
During a sign-in user journey, a relying party application may target a specific user or domain name. When targeting a user, an application can specify, in the authorization request, the `login_hint` query parameter with the user sign-in name. Azure AD B2C automatically populates the sign-in name, while the user only needs to provide the password.
The user is able to change the value in the sign-in textbox.
::: zone pivot="b2c-custom-policy"
-To support login hint parameter, override the `SelfAsserted-LocalAccountSignin-Email` technical profile. In the `<InputClaims>` section, set the DefaultValue of the signInName claim to `{OIDC:LoginHint}`. The `{OIDC:LoginHint}` variable contains the value of the `login_hint` parameter. Azure AD B2C reads the value of the signInName claim and pre-populates the signInName textbox.
+To support sign in hint parameter, override the `SelfAsserted-LocalAccountSignin-Email` technical profile. In the `<InputClaims>` section, set the DefaultValue of the signInName claim to `{OIDC:LoginHint}`. The `{OIDC:LoginHint}` variable contains the value of the `login_hint` parameter. Azure AD B2C reads the value of the signInName claim and pre-populates the signInName textbox.
```xml
<ClaimsProvider>
To support login hint parameter, override the `SelfAsserted-LocalAccountSignin-E
::: zone-end
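On the relying-party side, the `login_hint` described above is just another query parameter on the authorization request. A minimal Python sketch (the tenant, policy, and client ID below are illustrative placeholders):

```python
from urllib.parse import urlencode, quote

def add_login_hint(authorize_url: str, username: str) -> str:
    # Append login_hint so Azure AD B2C prepopulates the sign-in name box.
    separator = "&" if "?" in authorize_url else "?"
    return authorize_url + separator + urlencode({"login_hint": username},
                                                 quote_via=quote)

url = add_login_hint(
    "https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_susi/"
    "oauth2/v2.0/authorize?client_id=00000000-0000-0000-0000-000000000000",
    "bob@contoso.com")
```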
-## Redirect sign-in to a social provider
+## Redirect sign in to a social provider
-If you configured the sign-in journey for your application to include social accounts, such as Facebook, LinkedIn, or Google, you can specify the `domain_hint` parameter. This query parameter provides a hint to Azure AD B2C about the social identity provider that should be used for sign-in. For example, if the application specifies `domain_hint=facebook.com`, sign-in goes directly to the Facebook sign-in page.
+If you configured the sign-in journey for your application to include social accounts, such as Facebook, LinkedIn, or Google, you can specify the `domain_hint` parameter. This query parameter provides a hint to Azure AD B2C about the social identity provider that should be used for sign-in. For example, if the application specifies `domain_hint=facebook.com`, sign in goes directly to the Facebook sign in page.
![Sign up sign in page with domain_hint query param highlighted in URL](./media/direct-signin/domain-hint.png)
To support domain hint parameter, you can configure the domain name using the `<
... ``` -
active-directory-b2c Implicit Flow Single Page Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/implicit-flow-single-page-application.md
Title: Single-page sign-in using implicit flow
+ Title: Single-page application sign in using the OAuth 2.0 implicit flow in Azure Active Directory B2C
-description: Learn how to add single-page sign-in using the OAuth 2.0 implicit flow with Azure Active Directory B2C.
+description: Learn how to add single-page sign in using the OAuth 2.0 implicit flow with Azure Active Directory B2C.
Previously updated : 07/19/2019 Last updated : 03/31/2022
-# Single-page sign in using the OAuth 2.0 implicit flow in Azure Active Directory B2C
+# Single-page application sign in using the OAuth 2.0 implicit flow in Azure Active Directory B2C
Many modern applications have a single-page app (SPA) front end that is written primarily in JavaScript. Often, the app is written by using a framework like React, Angular, or Vue.js. SPAs and other JavaScript apps that run primarily in a browser have some additional challenges for authentication:

- The security characteristics of these apps are different from traditional server-based web applications.
- Many authorization servers and identity providers do not support cross-origin resource sharing (CORS) requests.
- Full-page browser redirects away from the app can be invasive to the user experience.

The recommended way of supporting SPAs is [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md).
-Some frameworks, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), only support the implicit grant flow. In these cases, Azure Active Directory B2C (Azure AD B2C) supports the OAuth 2.0 authorization implicit grant flow. The flow is described in [section 4.2 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). In implicit flow, the app receives tokens directly from the Azure Active Directory (Azure AD) authorize endpoint, without any server-to-server exchange. All authentication logic and session handling is done entirely in the JavaScript client with either a page redirect or a pop-up box.
+Some frameworks, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-core), only support the implicit grant flow. In these cases, Azure Active Directory B2C (Azure AD B2C) supports the OAuth 2.0 authorization implicit grant flow. The flow is described in [section 4.2 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). In implicit flow, the app receives tokens directly from the Azure AD B2C authorize endpoint, without any server-to-server exchange. All authentication logic and session handling is done entirely in the JavaScript client with either a page redirect or a pop-up box.
-Azure AD B2C extends the standard OAuth 2.0 implicit flow to more than simple authentication and authorization. Azure AD B2C introduces the [policy parameter](user-flow-overview.md). With the policy parameter, you can use OAuth 2.0 to add policies to your app, such as sign-up, sign-in, and profile management user flows. In the example HTTP requests in this article, **{tenant}.onmicrosoft.com** is used as an example. Replace `{tenant}` with the name of your tenant if you have one and have also created a user flow.
+Azure AD B2C extends the standard OAuth 2.0 implicit flow to more than simple authentication and authorization. Azure AD B2C introduces the [policy parameter](user-flow-overview.md). With the policy parameter, you can use OAuth 2.0 to add policies to your app, such as sign up, sign in, and profile management user flows. In the example HTTP requests in this article, we use **{tenant}.onmicrosoft.com** for illustration. Replace `{tenant}` with [the name of your tenant](tenant-management.md#get-your-tenant-name) if you have one. Also, you need to have [created a user flow](tutorial-create-user-flows.md?pivots=b2c-user-flow).
-The implicit sign-in flow looks something like the following figure. Each step is described in detail later in the article.
+We use the following figure to illustrate the implicit sign in flow. Each step is described in detail later in the article.
![Swimlane-style diagram showing the OpenID Connect implicit flow](./media/implicit-flow-single-page-application/convergence_scenarios_implicit.png)

## Send authentication requests
-When your web application needs to authenticate the user and run a user flow, it can direct the user to the `/authorize` endpoint. The user takes action depending on the user flow.
+When your web application needs to authenticate the user and run a user flow, it directs the user to the Azure AD B2C's `/authorize` endpoint. The user takes action depending on the user flow.
+
+In this request, the client indicates the permissions that it needs to acquire from the user in the `scope` parameter and the user flow to run. To get a feel for how the request works, try pasting the request into a browser and running it. Replace:
+
+- `{tenant}` with the name of your Azure AD B2C tenant.
-In this request, the client indicates the permissions that it needs to acquire from the user in the `scope` parameter and the user flow to run. To get a feel for how the request works, try pasting the request into a browser and running it. Replace `{tenant}` with the name of your Azure AD B2C tenant. Replace `90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6` with the app ID of the application you've previously registered in your tenant. Replace `{policy}` with the name of a policy you've created in your tenant, for example `b2c_1_sign_in`.
+- `90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6` with the app ID of the application you've registered in your tenant.
+
+- `{policy}` with the name of a policy you've created in your tenant, for example `b2c_1_sign_in`.
```http
GET https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/authorize?
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
&state=arbitrary_data_you_can_receive_in_the_response
&nonce=12345
```
+The parameters in the HTTP GET request are explained in the table below.
| Parameter | Required | Description |
| -- | -- | -- |
|{tenant}| Yes | Name of your Azure AD B2C tenant|
|{policy}| Yes| The user flow to be run. Specify the name of a user flow you've created in your Azure AD B2C tenant. For example: `b2c_1_sign_in`, `b2c_1_sign_up`, or `b2c_1_edit_profile`. |
| client_id | Yes | The application ID that the [Azure portal](https://portal.azure.com/) assigned to your application. |
-| response_type | Yes | Must include `id_token` for OpenID Connect sign-in. It also can include the response type `token`. If you use `token`, your app can immediately receive an access token from the authorize endpoint, without making a second request to the authorize endpoint. If you use the `token` response type, the `scope` parameter must contain a scope that indicates which resource to issue the token for. |
-| redirect_uri | No | The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL-encoded. |
+| response_type | Yes | Must include `id_token` for OpenID Connect sign in. It can also include the response type `token`. If you use `token`, your app can immediately receive an access token from the authorize endpoint, without making a second request to the authorize endpoint. If you use the `token` response type, the `scope` parameter must contain a scope that indicates which resource to issue the token for. |
+| redirect_uri | No | The redirect URI of your app, where authentication responses can be sent and received by your app. It must exactly match one of the redirect URIs that you added to a registered application in the portal, except that it must be URL-encoded. |
| response_mode | No | Specifies the method to use to send the resulting token back to your app. For implicit flows, use `fragment`. |
| scope | Yes | A space-separated list of scopes. A single scope value indicates to Azure AD both of the permissions that are being requested. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web apps. It indicates that your app needs a refresh token for long-lived access to resources. |
-| state | No | A value included in the request that also is returned in the token response. It can be a string of any content that you want to use. Usually, a randomly generated, unique value is used, to prevent cross-site request forgery attacks. The state is also used to encode information about the user's state in the app before the authentication request occurred, like the page they were on. |
+| state | No | A value included in the request that also is returned in the token response. It can be a string of any content that you want to use. Usually, a randomly generated, unique value is used, to prevent cross-site request forgery attacks. The state is also used to encode information about the user's state in the app before the authentication request occurred, for example, the page the user was on, or the user flow that was being executed. |
| nonce | Yes | A value included in the request (generated by the app) that is included in the resulting ID token as a claim. The app can then verify this value to mitigate token replay attacks. Usually, the value is a randomized, unique string that can be used to identify the origin of the request. |
| prompt | No | The type of user interaction that's required. Currently, the only valid value is `login`. This parameter forces the user to enter their credentials on that request. Single sign-on doesn't take effect. |
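The parameters in the table can be assembled into the implicit-flow request with freshly generated `state` and `nonce` values. A hedged Python sketch (placeholder tenant and policy names; not the article's own sample):

```python
import secrets
from urllib.parse import urlencode, quote

def implicit_authorize_url(tenant, policy, client_id, redirect_uri, scope):
    # Per the table above: state guards against CSRF, nonce guards
    # against token replay. Both are generated fresh per request.
    state = secrets.token_urlsafe(16)
    nonce = secrets.token_urlsafe(16)
    params = {
        "client_id": client_id,
        "response_type": "id_token token",
        "redirect_uri": redirect_uri,
        "response_mode": "fragment",
        "scope": scope,
        "state": state,
        "nonce": nonce,
    }
    url = (f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
           f"{policy}/oauth2/v2.0/authorize?{urlencode(params, quote_via=quote)}")
    return url, state, nonce

url, state, nonce = implicit_authorize_url(
    "contoso", "b2c_1_sign_in", "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6",
    "https://localhost/", "openid")
```

The app must remember `state` and `nonce` so it can verify them when the response comes back.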
-At this point, the user is asked to complete the policy's workflow. The user might have to enter their username and password, sign in with a social identity, sign up for the directory, or any other number of steps. User actions depend on how the user flow is defined.
+This is the interactive part of the flow. The user is asked to complete the policy's workflow. The user might have to enter their username and password, sign in with a social identity, sign up for a local account, or any other number of steps. User actions depend on how the user flow is defined.
-After the user completes the user flow, Azure AD B2C returns a response to your app at the value you used for `redirect_uri`. It uses the method specified in the `response_mode` parameter. The response is exactly the same for each of the user action scenarios, independent of the user flow that was executed.
+After the user completes the user flow, Azure AD B2C returns a response to your app via the `redirect_uri`. It uses the method specified in the `response_mode` parameter. The response is exactly the same for each of the user action scenarios, independent of the user flow that was executed.
### Successful response

A successful response that uses `response_mode=fragment` and `response_type=id_token+token` looks like the following, with line breaks for legibility:
access_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Ik5HVEZ2ZEstZnl0aEV1Q..
| Parameter | Description |
| -- | -- |
-| access_token | The access token that the app requested. |
-| token_type | The token type value. The only type that Azure AD supports is Bearer. |
+| access_token | The access token that the app requested from Azure AD B2C.|
+| token_type | The token type value. The only type that Azure AD B2C supports is Bearer. |
| expires_in | The length of time that the access token is valid (in seconds). |
| scope | The scopes that the token is valid for. You also can use scopes to cache tokens for later use. |
| id_token | The ID token that the app requested. You can use the ID token to verify the user's identity and begin a session with the user. For more information about ID tokens and their contents, see the [Azure AD B2C token reference](tokens-overview.md). |
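With `response_mode=fragment`, the parameters above arrive in the URL fragment and can be parsed the same way as a query string. A small Python sketch (the token values below are illustrative fragments, not real tokens):

```python
from urllib.parse import parse_qs, urlparse

def parse_fragment_response(redirect_url: str) -> dict:
    # Tokens come back after '#'; the fragment uses the same
    # key=value&key=value encoding as a query string.
    fragment = urlparse(redirect_url).fragment
    return {key: values[0] for key, values in parse_qs(fragment).items()}

response = parse_fragment_response(
    "https://aadb2cplayground.azurewebsites.net/"
    "#access_token=eyJ0eXAiOiJKV1Qi&token_type=Bearer&expires_in=3600"
    "&scope=openid&id_token=eyJ0eXAiOiJKV1Qi&state=arbitrary_data")
```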
Receiving an ID token is not enough to authenticate the user. Validate the ID to
Many open-source libraries are available for validating JWTs, depending on the language you prefer to use. Consider exploring available open-source libraries rather than implementing your own validation logic. You can use the information in this article to help you learn how to properly use those libraries.
-Azure AD B2C has an OpenID Connect metadata endpoint. An app can use the endpoint to fetch information about Azure AD B2C at runtime. This information includes endpoints, token contents, and token signing keys. There is a JSON metadata document for each user flow in your Azure AD B2C tenant. For example, the metadata document for the b2c_1_sign_in user flow in the fabrikamb2c.onmicrosoft.com tenant is located at:
+Azure AD B2C has an OpenID Connect metadata endpoint. An app can use the endpoint to fetch information about Azure AD B2C at runtime. This information includes endpoints, token contents, and token signing keys. There is a JSON metadata document for each user flow in your Azure AD B2C tenant. For example, the metadata document for a user flow named `b2c_1_sign_in` in a `fabrikamb2c.onmicrosoft.com` tenant is located at:
```http
https://fabrikamb2c.b2clogin.com/fabrikamb2c.onmicrosoft.com/b2c_1_sign_in/v2.0/.well-known/openid-configuration
One of the properties of this configuration document is the `jwks_uri`. The valu
https://fabrikamb2c.b2clogin.com/fabrikamb2c.onmicrosoft.com/b2c_1_sign_in/discovery/v2.0/keys
```
-To determine which user flow was used to sign an ID token (and where to fetch the metadata from), you have two options:
+To determine which user flow was used to sign an ID token (and where to fetch the metadata from), you can use any of following options:
- The user flow name is included in the `acr` claim in `id_token`. For information about how to parse the claims from an ID token, see the [Azure AD B2C token reference](tokens-overview.md).
-- Encode the user flow in the value of the `state` parameter when you issue the request. Then, decode the `state` parameter to determine which user flow was used. Either method is valid.
+- Encode the user flow in the value of the `state` parameter when you issue the request. Then, decode the `state` parameter to determine which user flow was used.
After you've acquired the metadata document from the OpenID Connect metadata endpoint, you can use the RSA-256 public keys (located at this endpoint) to validate the signature of the ID token. There might be multiple keys listed at this endpoint at any given time, each identified by a `kid`. The header of `id_token` also contains a `kid` claim. It indicates which of these keys was used to sign the ID token. For more information, including learning about [validating tokens](tokens-overview.md), see the [Azure AD B2C token reference](tokens-overview.md).
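The `kid` lookup described above amounts to matching the ID token header against the published key set. A hedged sketch; the JWKS document here is hypothetical and trimmed, since a real one carries the RSA modulus and exponent needed for signature validation:

```python
def find_signing_key(jwks: dict, kid: str) -> dict:
    # Match the `kid` claim from the id_token header against the keys
    # published at jwks_uri; keys rotate, so a miss means the app
    # should re-fetch the metadata document.
    for key in jwks.get("keys", []):
        if key.get("kid") == kid:
            return key
    raise KeyError(f"no signing key with kid {kid!r}; refresh the metadata")

# Hypothetical, trimmed JWKS document for illustration only.
jwks = {"keys": [{"kid": "key-2022-03", "kty": "RSA", "use": "sig"}]}
```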
After you've acquired the metadata document from the OpenID Connect metadata end
After you validate the signature of the ID token, several claims require verification. For example:

* Validate the `nonce` claim to prevent token replay attacks. Its value should be what you specified in the sign-in request.
* Validate the `aud` claim to ensure that the ID token was issued for your app. Its value should be the application ID of your app.
* Validate the `iat` and `exp` claims to ensure that the ID token has not expired.

Several more validations that you should perform are described in detail in the [OpenID Connect Core Spec](https://openid.net/specs/openid-connect-core-1_0.html). You might also want to validate additional claims, depending on your scenario. Some common validations include:

* Ensuring that the user or organization has signed up for the app.
* Ensuring that the user has proper authorization and privileges.
* Ensuring that a certain strength of authentication has occurred, such as by using Azure AD Multi-Factor Authentication.

For more information about the claims in an ID token, see the [Azure AD B2C token reference](tokens-overview.md).
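The first three checks in the list above can be sketched as follows. This is an illustration under assumed claim values, not a complete validator; a production app would use a JWT library and also perform the additional OpenID Connect checks:

```python
import time

def validate_claims(claims: dict, expected_nonce: str, app_id: str, now=None):
    # Mirrors the checks above: nonce (replay), aud (audience),
    # iat/exp (token lifetime). Raises ValueError on any failure.
    now = time.time() if now is None else now
    if claims.get("nonce") != expected_nonce:
        raise ValueError("nonce mismatch: possible token replay")
    if claims.get("aud") != app_id:
        raise ValueError("id_token was not issued for this application")
    if not (claims.get("iat", 0) <= now < claims.get("exp", 0)):
        raise ValueError("id_token is expired or not yet valid")
    return True

# Illustrative claim values only.
claims = {"nonce": "12345", "aud": "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6",
          "iat": 1648684800, "exp": 1648688400}
```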
For more information about the claims in an ID token, see the [Azure AD B2C toke
After you have validated the ID token, you can begin a session with the user. In your app, use the claims in the ID token to obtain information about the user. This information can be used for display, records, authorization, and so on.

## Get access tokens
-If the only thing your web apps needs to do is execute user flows, you can skip the next few sections. The information in the following sections is applicable only to web apps that need to make authenticated calls to a web API, and which are protected by Azure AD B2C.
+
+If the only thing your web app needs to do is execute user flows, you can skip the next few sections. The information in the following sections is applicable only to web apps that need to make authenticated calls to a web API that is protected by Azure AD B2C itself.
Now that you've signed the user into your SPA, you can get access tokens for calling web APIs that are secured by Azure AD. Even if you have already received a token by using the `token` response type, you can use this method to acquire tokens for additional resources without redirecting the user to sign in again.
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
| login_hint |Required |To refresh and get tokens in a hidden iframe, include the username of the user in this hint to distinguish between multiple sessions the user might have at a given time. You can extract the username from an earlier sign-in by using the `preferred_username` claim (the `profile` scope is required in order to receive the `preferred_username` claim). |
| domain_hint |Required |Can be `consumers` or `organizations`. For refreshing and getting tokens in a hidden iframe, include the `domain_hint` value in the request. Extract the `tid` claim from the ID token of an earlier sign-in to determine which value to use (the `profile` scope is required in order to receive the `tid` claim). If the `tid` claim value is `9188040d-6c67-4c5b-b112-36a304b66dad`, use `domain_hint=consumers`. Otherwise, use `domain_hint=organizations`. |
-By setting the `prompt=none` parameter, this request either succeeds or fails immediately, and returns to your application. A successful response is sent to your app at the indicated redirect URI, by using the method specified in the `response_mode` parameter.
+By setting the `prompt=none` parameter, this request either succeeds or fails immediately, and returns to your application. A successful response is sent to your app via the redirect URI, by using the method specified in the `response_mode` parameter.
### Successful response

A successful response by using `response_mode=fragment` looks like this example:
If you receive this error in the iframe request, the user must interactively sig
## Refresh tokens

ID tokens and access tokens both expire after a short period of time. Your app must be prepared to refresh these tokens periodically. Implicit flows do not allow you to obtain a refresh token due to security reasons. To refresh either type of token, use the implicit flow in a hidden HTML iframe element. In the authorization request include the `prompt=none` parameter. To receive a new id_token value, be sure to use `response_type=id_token` and `scope=openid`, and a `nonce` parameter.
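A silent-refresh request combining `prompt=none` with the `login_hint` and `domain_hint` values described earlier might be built like this. The endpoint, client ID, and hint values below are illustrative placeholders:

```python
from urllib.parse import urlencode, quote

def silent_refresh_url(authorize_endpoint, client_id, redirect_uri,
                       nonce, login_hint, domain_hint):
    # prompt=none makes the hidden-iframe request succeed or fail
    # immediately instead of rendering sign-in UI; response_type=id_token
    # with scope=openid renews the ID token.
    params = {
        "client_id": client_id,
        "response_type": "id_token",
        "redirect_uri": redirect_uri,
        "response_mode": "fragment",
        "scope": "openid",
        "prompt": "none",
        "login_hint": login_hint,
        "domain_hint": domain_hint,
        "nonce": nonce,
    }
    return f"{authorize_endpoint}?{urlencode(params, quote_via=quote)}"

url = silent_refresh_url(
    "https://contoso.b2clogin.com/contoso.onmicrosoft.com/b2c_1_sign_in/oauth2/v2.0/authorize",
    "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6", "https://localhost/",
    "a-fresh-nonce", "bob@contoso.com", "organizations")
```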
-## Send a sign-out request
-When you want to sign the user out of the app, redirect the user to Azure AD to sign out. If you don't redirect the user, they might be able to reauthenticate to your app without entering their credentials again because they have a valid single sign-on session with Azure AD.
+## Send a sign out request
+
+When you want to sign the user out of the app, redirect the user to Azure AD B2C's sign out endpoint. You can then clear the user's session in the app. If you don't redirect the user, they might be able to reauthenticate to your app without entering their credentials again because they have a valid single sign-on session with Azure AD B2C.
You can simply redirect the user to the `end_session_endpoint` that is listed in the same OpenID Connect metadata document described in [Validate the ID token](#validate-the-id-token). For example:
GET https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/
| Parameter | Required | Description |
| -- | -- | -- |
-| {tenant} | Yes | Name of your Azure AD B2C tenant |
-| {policy} | Yes | The user flow that you want to use to sign the user out of your application. |
+| {tenant} | Yes | Name of your Azure AD B2C tenant. |
+| {policy} | Yes | The user flow that you want to use to sign the user out of your application. This needs to be the same user flow that the app used to sign the user in. |
| post_logout_redirect_uri | No | The URL that the user should be redirected to after successful sign out. If it isn't included, Azure AD B2C shows the user a generic message. |
| state | No | If a `state` parameter is included in the request, the same value should appear in the response. The application should verify that the `state` values in the request and response are identical. |

> [!NOTE]
-> Directing the user to the `end_session_endpoint` clears some of the user's single sign-on state with Azure AD B2C. However, it doesn't sign the user out of the user's social identity provider session. If the user selects the same identity provider during a subsequent sign-in, the user is re-authenticated, without entering their credentials. If a user wants to sign out of your Azure AD B2C application, it does not necessarily mean they want to completely sign out of their Facebook account, for example. However, for local accounts, the user's session will be ended properly.
+> Directing the user to the `end_session_endpoint` clears some of the user's single sign-on state with Azure AD B2C. However, it doesn't sign the user out of the user's social identity provider session. If the user selects the same identity provider during a subsequent sign in, the user is re-authenticated, without entering their credentials. If a user wants to sign out of your Azure AD B2C application, it does not necessarily mean they want to completely sign out of their Facebook account, for example. However, for local accounts, the user's session will be ended properly.
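The sign-out request described in this section can be sketched as a URL builder. The tenant and policy values below are illustrative placeholders; in practice you would read the `end_session_endpoint` from the metadata document rather than hard-coding the path:

```python
from urllib.parse import urlencode, quote

def logout_url(tenant, policy, post_logout_redirect_uri, state=None):
    # Per the table above, use the same user flow (policy) that
    # signed the user in.
    params = {"post_logout_redirect_uri": post_logout_redirect_uri}
    if state is not None:
        params["state"] = state
    return (f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
            f"{policy}/oauth2/v2.0/logout?{urlencode(params, quote_via=quote)}")

url = logout_url("contoso", "b2c_1_sign_in", "https://localhost:3000")
```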
>

## Next steps
active-directory-b2c Integrate With App Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/integrate-with-app-code-samples.md
Previously updated : 10/02/2020 Last updated : 03/31/2022
The following tables provide links to samples for applications including iOS, An
| [dotnet-webapp-and-webapi](https://github.com/Azure-Samples/active-directory-b2c-dotnet-webapp-and-webapi) | A combined sample for a .NET web application that calls a .NET Web API, both secured using Azure AD B2C. |
| [dotnetcore-webapp-openidconnect](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-5-B2C) | An ASP.NET Core web application that uses OpenID Connect to sign in users in Azure AD B2C. |
| [dotnetcore-webapp-msal-api](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/4-WebApp-your-API/4-2-B2C) | An ASP.NET Core web application that can sign in a user using Azure AD B2C, get an access token using MSAL.NET and call an API. |
-| [openidconnect-nodejs](https://github.com/AzureADQuickStarts/B2C-WebApp-OpenIDConnect-NodeJS) | A Node.js app that provides a quick and easy way to set up a Web application with Express using OpenID Connect. |
+| [auth-code-flow-nodejs](https://github.com/Azure-Samples/active-directory-b2c-msal-node-sign-in-sign-out-webapp) | A Node.js app that shows how to enable authentication (sign in, sign out and profile edit) in a Node.js web application using Azure Active Directory B2C. The web app uses MSAL-node.|
| [javascript-nodejs-webapi](https://github.com/Azure-Samples/active-directory-b2c-javascript-nodejs-webapi) | A small Node.js Web API for Azure AD B2C that shows how to protect your web API and accept B2C access tokens using passport.js. |
| [ms-identity-python-webapp](https://github.com/Azure-Samples/ms-identity-python-webapp/blob/master/README_B2C.md) | Demonstrates how to integrate B2C of Microsoft identity platform with a Python web application. |
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
Previously updated : 02/28/2022 Last updated : 03/20/2022
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-In this sample tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Transmit Security](https://www.transmitsecurity.com/bindid) passwordless authentication solution **BindID**. BindID is a passwordless authentication service that uses strong Fast Identity Online (FIDO2) biometric authentication for a reliable omni-channel authentication experience. The solution ensures a smooth login experience for all customers across every device and channel eliminating fraud, phishing, and credential reuse.
+
+In this sample tutorial, learn how to integrate Azure Active Directory B2C (Azure AD B2C) authentication with [Transmit Security's](https://www.transmitsecurity.com/bindid) passwordless authentication solution **BindID**. BindID is a passwordless authentication service that uses strong Fast Identity Online (FIDO2) biometric authentication for a reliable omni-channel authentication experience. The solution ensures a smooth sign in experience for all customers across every device and channel, and it eliminates fraud, phishing, and credential reuse.
+ ## Scenario description
The following architecture diagram shows the implementation.
|Step | Description |
|:--| :--|
-| 1. | User attempts to log in to an Azure AD B2C application and is forwarded to Azure AD B2C's combined sign-in and sign-up policy.
-| 2. | Azure AD B2C redirects the user to BindID using the OpenID Connect (OIDC) authorization code flow.
+| 1. | User opens Azure AD B2C's sign in page, and then signs in or signs up by entering their username.
+| 2. | Azure AD B2C redirects the user to BindID using an OpenID Connect (OIDC) request.
| 3. | BindID authenticates the user using appless FIDO2 biometrics, such as fingerprint. |
| 4. | A decentralized authentication response is returned to BindID. |
| 5. | The OIDC response is passed on to Azure AD B2C. |
-| 6.| User is either granted or denied access to the customer application based on the verification results.
-
-## Onboard with BindID
-
-To integrate BindID with your Azure AD B2C instance, you'll need to configure an application in the [BindID Admin
-Portal](https://admin.bindid-sandbox.io/console/). For more information, see [getting started guide](https://developer.bindid.io/docs/guides/admin_portal/topics/getStarted/get_started_admin_portal). You can either create a new application or use one that you already created.
+| 6. | User is either granted or denied access to the customer application based on the verification results.
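The redirect in step 2 is a standard OpenID Connect authorization-code request. As a rough illustration only — the endpoint path and all parameter values below are placeholders, not Azure AD B2C's exact wire format — such a request URL can be composed like this:

```python
from urllib.parse import urlencode

def build_oidc_auth_request(authorize_endpoint, client_id, redirect_uri, state):
    """Compose a standard OIDC authorization-code request URL."""
    params = {
        "client_id": client_id,
        "response_type": "code",       # authorization code flow
        "response_mode": "form_post",  # matches the response mode used later in this article
        "scope": "openid email",
        "redirect_uri": redirect_uri,
        "state": state,
    }
    return f"{authorize_endpoint}?{urlencode(params)}"

url = build_oidc_auth_request(
    "https://signin.bindid-sandbox.io/authorize",  # illustrative endpoint, not authoritative
    "00000000-0000-0000-0000-000000000000",
    "https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp",
    "arbitrary-opaque-state",
)
print(url)
```

The `state` value is opaque data that is echoed back in the response, as in the authorization code flow example at the top of this digest.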
## Prerequisites
To get started, you'll need:
- A BindID tenant. You can [sign up for free.](https://www.transmitsecurity.com/developer?utm_signup=dev_hub#try)
-- If you haven't already done so, [register](./tutorial-register-applications.md) a web application, [and enable ID token implicit grant](./tutorial-register-applications.md#enable-id-token-implicit-grant).
+- If you haven't already done so, [register](./tutorial-register-applications.md) a web application in the Azure portal.
::: zone pivot="b2c-custom-policy"
-- Complete the steps in the article [Get started with custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy).
+- Ability to use Azure AD B2C custom policies. If you can't, complete the steps in [Get started with custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy) to learn how to use custom policies.
::: zone-end
-### Step 1 - Create an application registration in BindID
+## Step 1: Register an app in BindID
-For [Applications](https://admin.bindid-sandbox.io/console/#/applications) to configure your tenant application in BindID, the following information is needed
+Follow the steps in [Configure Your Application](https://developer.bindid.io/docs/guides/quickstart/topics/quickstart_web#step-1-configure-your-application) to add an application in the [BindID Admin Portal](https://admin.bindid-sandbox.io/console/). The following information is needed:
| Property | Description |
|:---|:---|
-| Name | Azure AD B2C/your desired application name|
-| Domain | name.onmicrosoft.com|
-| Redirect URIs| https://jwt.ms |
-| Redirect URLs |Specify the page to which users are redirected after BindID authentication: `https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp`<br>For Example: `https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp`<br>If you use a custom domain, enter https://your-domain-name/your-tenant-name.onmicrosoft.com/oauth2/authresp.<br>Replace your-domain-name with your custom domain, and your-tenant-name with the name of your tenant.|
+| Name | Name of your application such as `Azure AD B2C BindID app`|
+| Domain | Enter `your-B2C-tenant-name.onmicrosoft.com`. Replace `your-B2C-tenant-name` with the name of your Azure AD B2C tenant.|
+| Redirect URIs | [https://jwt.ms/](https://jwt.ms/) |
+| Redirect URLs | Enter `https://your-B2C-tenant-name.b2clogin.com/your-B2C-tenant-name.onmicrosoft.com/oauth2/authresp`. Replace `your-B2C-tenant-name` with the name of your Azure AD B2C tenant. If you use a custom domain, replace `your-B2C-tenant-name.b2clogin.com` with your custom domain such as `contoso.com`.|
->[!NOTE]
->BindID will provide you Client ID and Client Secret, which you'll need later to configure the Identity provider in Azure AD B2C.
+
+After you register the app in BindID, you'll get a **Client ID** and a **Client Secret**. Record the values as you'll need them later to configure BindID as an identity provider in Azure AD B2C.
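The **Redirect URL** in the table above follows a predictable pattern, so it can be derived from the tenant name. A minimal sketch — the `bindid_redirect_url` helper is illustrative, not part of any SDK:

```python
def bindid_redirect_url(tenant, custom_domain=None):
    """Return the Azure AD B2C OAuth response endpoint to register in BindID."""
    host = custom_domain or f"{tenant}.b2clogin.com"
    return f"https://{host}/{tenant}.onmicrosoft.com/oauth2/authresp"

print(bindid_redirect_url("fabrikam"))
# → https://fabrikam.b2clogin.com/fabrikam.onmicrosoft.com/oauth2/authresp

# With a custom domain, only the host portion changes:
print(bindid_redirect_url("fabrikam", custom_domain="contoso.com"))
# → https://contoso.com/fabrikam.onmicrosoft.com/oauth2/authresp
```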
::: zone pivot="b2c-user-flow"
-### Step 2 - Add a new Identity provider in Azure AD B2C
+## Step 2: Configure BindID as an identity provider in Azure AD B2C
-1. Sign-in to the [Azure portal](https://portal.azure.com/#home) as the global administrator of your Azure AD B2C tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com/#home) as the global administrator of your Azure AD B2C tenant.
-2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
-3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
-4. Choose **All services** in the top-left corner of the Azure portal, then search for and select **Azure AD B2C**.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-5. Navigate to **Dashboard** > **Azure Active Directory B2C** > **Identity providers**.
+1. In the top-left corner of the Azure portal, select **All services**, and then search for and select **Azure AD B2C**.
-6. Select **New OpenID Connect Provider**.
+1. Select **Identity providers**, and then select **New OpenID Connect provider**.
-7. Select **Add**.
+1. Enter a **Name**. For example, enter `Login with BindID`.
-### Step 3 - Configure an Identity provider
+1. For **Metadata url**, enter `https://signin.bindid-sandbox.io/.well-known/openid-configuration`.
-1. Select **Identity provider type > OpenID Connect**
+1. For **Client ID**, enter the client ID that you previously recorded in [step 1](#step-1-register-an-app-in-bindid).
-2. Fill out the form to set up the Identity provider:
+1. For **Client secret**, enter the client secret that you previously recorded in [step 1](#step-1-register-an-app-in-bindid).
- |Property |Value |
- |:|:|
- |Name |Enter BindID – Passwordless or a name of your choice|
- |Metadata URL| `https://signin.bindid-sandbox.io/.well-known/openid-configuration` |
- |Client ID|The application ID from the BindID admin UI captured in **Step 1**|
- |Client Secret|The application Secret from the BindID admin UI captured in **Step 1**|
- |Scope|OpenID email|
- |Response type|Code|
- |Response mode|form_post|
- |**Identity provider claims mapping**|
- |User ID|sub|
- |Email|email|
+1. For **Scope**, enter `openid email`.
-3. Select **Save** to complete the setup for your new OIDC Identity provider.
+1. For **Response type**, select **code**.
-### Step 4 - Create a user flow policy
+1. For **Response mode**, select **form_post**.
-You should now see BindID as a new OIDC Identity provider listed within your B2C identity providers.
+1. Under **Identity provider claims mapping**, select the following claims:
+
+ 1. **User ID**: `sub`
+ 1. **Email**: `email`
-1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
+1. Select **Save**.
-2. Select **New user flow**
+## Step 3: Create a user flow
-3. Select **Sign up and sign in** > **Version Recommended** > **Create**.
+1. In your Azure AD B2C tenant, under **Policies**, select **User flows**.
-4. Enter a **Name** for your policy.
+1. Select **New user flow**.
-5. In the Identity providers section, select your newly created BindID Identity provider.
+1. Select the **Sign up and sign in** user flow type, and then select **Create**.
-6. Select **None** for Local Accounts to disable email and password-based authentication.
+1. Enter a **Name** for your user flow such as `signupsignin`.
-7. Select **Create**
+1. Under **Identity providers**:
+
+ 1. For **Local Accounts**, select **None** to disable email and password-based authentication.
+
+ 1. For **Custom identity providers**, select your newly created BindID identity provider, such as **Login with BindID**.
-8. Select the newly created User Flow
+1. Select **Create**.
-9. Select **Run user flow**
+## Step 4: Test your user flow
-10. In the form, select the JWT Application and enter the Replying URL, such as `https://jwt.ms`.
+1. In your Azure AD B2C tenant, select **User flows**.
-11. Select **Run user flow**.
+1. Select the newly created user flow such as **B2C_1_signupsignin**.
-12. The browser will be redirected to the BindID login page. Enter the account name registered during User registration. The user enters the registered account email and authenticates using appless FIDO2 biometrics, such as fingerprint.
+1. For **Application**, select the web application that you previously registered as part of this article's prerequisites. The **Reply URL** should show `https://jwt.ms`.
-13. Once the authentication challenge is accepted, the browser will redirect the user to the replying URL.
+1. Select the **Run user flow** button. Your browser should be redirected to the BindID sign in page.
+
+1. Enter the registered account email and authenticate using appless FIDO2 biometrics, such as fingerprint. Once the authentication challenge is accepted, your browser should be redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
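`https://jwt.ms` simply decodes the returned token in the browser; the same inspection can be done locally. A minimal sketch — the token below is fabricated purely for illustration, and no signature validation is performed:

```python
import base64
import json

def decode_jwt_payload(token):
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated header.payload.signature token for demonstration:
header = base64.urlsafe_b64encode(json.dumps({"alg": "RS256"}).encode()).decode().rstrip("=")
claims = {"sub": "1234", "email": "user@contoso.com", "tfp": "B2C_1_signupsignin"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
token = f"{header}.{payload}.signature"

print(decode_jwt_payload(token))
```

In a real token from Azure AD B2C you would also verify the signature against the tenant's signing keys before trusting any claims.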
::: zone-end

::: zone pivot="b2c-custom-policy"
-### Step 2 - Create a BindID policy key
+## Step 2: Create a BindID policy key
-Store the client secret that you previously recorded in your Azure AD B2C tenant.
+Add your BindID application's client secret as a policy key:
1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-
-3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
-4. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-5. On the Overview page, select **Identity Experience Framework**.
+1. On the Overview page, under **Policies**, select **Identity Experience Framework**.
-6. Select **Policy Keys** and then select **Add**.
+1. Select **Policy Keys** and then select **Add**.
-7. For **Options**, choose `Manual`.
+1. For **Options**, choose `Manual`.
-8. Enter a **Name** for the policy key. For example, `BindIDClientSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
+1. Enter a **Name** for the policy key. For example, `BindIDClientSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
-9. In **Secret**, enter your client secret that you previously recorded.
+1. In **Secret**, enter your client secret that you previously recorded in [step 1](#step-1-register-an-app-in-bindid).
-10. For **Key usage**, select `Signature`.
+1. For **Key usage**, select `Signature`.
-11. Select **Create**.
+1. Select **Create**.
->[!NOTE]
->In Azure Active Directory B2C, [**custom policies**](./user-flow-overview.md) are designed primarily to address complex scenarios. For most scenarios, we recommend that you use built-in [**user flows**](./user-flow-overview.md).
+## Step 3: Configure BindID as an Identity provider
-### Step 3- Configure BindID as an Identity provider
+To enable users to sign in using BindID, you need to define BindID as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated using a digital identity available on their device, proving the user's identity.
-To enable users to sign in using BindID, you need to define BindID as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify a specific user has authenticated using digital identity available on their device, proving the user's identity.
+Use the following steps to add BindID as a claims provider:
-You can define BindID as a claims provider by adding it to the **ClaimsProvider** element in the extension file of your policy
+1. Get the custom policy starter packs from GitHub, then update the XML files in the **LocalAccounts** starter pack with your Azure AD B2C tenant name:
+
+ 1. [Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository:
+ ```
+ git clone https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack
+ ```
+
+ 1. In all of the files in the **LocalAccounts** directory, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is `contoso`, all instances of `yourtenant.onmicrosoft.com` become `contoso.onmicrosoft.com`.
-1. Open the `TrustFrameworkExtensions.xml`.
+1. Open the `LocalAccounts/TrustFrameworkExtensions.xml` file.
-2. Find the **ClaimsProviders** element. If it dosen't exist, add it under the root element.
+1. Find the **ClaimsProviders** element. If it doesn't exist, add it under the root element.
-3. Add a new **ClaimsProvider** as follows:
+1. Add a new **ClaimsProvider** similar to the one shown below:
-```xml
- <ClaimsProvider>
- <Domain>signin.bindid-sandbox.io</Domain>
- <DisplayName>BindID</DisplayName>
- <TechnicalProfiles>
- <TechnicalProfile Id="BindID-OpenIdConnect">
+ ```xml
+ <ClaimsProvider>
+ <Domain>signin.bindid-sandbox.io</Domain>
<DisplayName>BindID</DisplayName>
- <Protocol Name="OpenIdConnect" />
- <Metadata>
- <Item Key="METADATA">https://signin.bindid-sandbox.io/.well-known/openid-configuration</Item>
- <!-- Update the Client ID below to the BindID Application ID -->
- <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
- <Item Key="response_types">code</Item>
- <Item Key="scope">openid email</Item>
- <Item Key="response_mode">form_post</Item>
- <Item Key="HttpBinding">POST</Item>
- <Item Key="UsePolicyInRedirectUri">false</Item>
- <Item Key="AccessTokenResponseFormat">json</Item>
- </Metadata>
- <CryptographicKeys>
- <Key Id="client_secret" StorageReferenceId="B2C_1A_BindIDClientSecret" />
- </CryptographicKeys>
- <OutputClaims>
- <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
- <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
- <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" />
- <OutputClaim ClaimTypeReferenceId="authenticationSource"
- DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
- </OutputClaims>
- <OutputClaimsTransformations>
- <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
- <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
- <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
- </OutputClaimsTransformations>
- <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
- </TechnicalProfile>
- </TechnicalProfiles>
- </ClaimsProvider>
-
-```
-
-4. Set **client_id** with your BindID Application ID.
-
-5. Save the file.
-
-### Step 4 - Add a user journey
-
-At this point, the identity provider has been set up, but it's not yet available in any of the sign-in pages. If you don't have your own custom user journey, create a duplicate of an existing template user journey, otherwise continue to the next step.
-
-1. Open the `TrustFrameworkBase.xml` file from the starter pack.
-
-2. Find and copy the entire contents of the **UserJourneys** element that includes `ID=SignUpOrSignIn`.
-
-3. Open the `TrustFrameworkExtensions.xml` and find the UserJourneys element. If the element doesn't exist, add one.
-
-4. Paste the entire content of the UserJourney element that you copied as a child of the UserJourneys element.
-
-5. Rename the ID of the user journey. For example, `ID=CustomSignUpSignIn`
-
-### Step 5 - Add the identity provider to a user journey
+ <TechnicalProfiles>
+ <TechnicalProfile Id="BindID-OpenIdConnect">
+ <DisplayName>BindID</DisplayName>
+ <Protocol Name="OpenIdConnect" />
+ <Metadata>
+ <Item Key="METADATA">https://signin.bindid-sandbox.io/.well-known/openid-configuration</Item>
+ <!-- Update the Client ID below to the BindID Application ID -->
+ <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
+ <Item Key="response_types">code</Item>
+ <Item Key="scope">openid email</Item>
+ <Item Key="response_mode">form_post</Item>
+ <Item Key="HttpBinding">POST</Item>
+ <Item Key="UsePolicyInRedirectUri">false</Item>
+ <Item Key="AccessTokenResponseFormat">json</Item>
+ </Metadata>
+ <CryptographicKeys>
+ <Key Id="client_secret" StorageReferenceId="B2C_1A_BindIDClientSecret" />
+ </CryptographicKeys>
+ <OutputClaims>
+ <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
+ <OutputClaim ClaimTypeReferenceId="email" PartnerClaimType="email" />
+ <OutputClaim ClaimTypeReferenceId="identityProvider" PartnerClaimType="iss" />
+ <OutputClaim ClaimTypeReferenceId="authenticationSource" DefaultValue="socialIdpAuthentication" AlwaysUseDefaultValue="true" />
+ </OutputClaims>
+ <OutputClaimsTransformations>
+ <OutputClaimsTransformation ReferenceId="CreateRandomUPNUserName" />
+ <OutputClaimsTransformation ReferenceId="CreateUserPrincipalName" />
+ <OutputClaimsTransformation ReferenceId="CreateAlternativeSecurityId" />
+ </OutputClaimsTransformations>
+ <UseTechnicalProfileForSessionManagement ReferenceId="SM-SocialLogin" />
+ </TechnicalProfile>
+ </TechnicalProfiles>
+ </ClaimsProvider>
+ ```
+
+1. Set **client_id** to the BindID Application ID that you previously recorded in [step 1](#step-1-register-an-app-in-bindid).
+
+1. Save the changes.
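The tenant-name substitution from the first step above can be scripted rather than done by hand. A minimal sketch, assuming the starter pack's XML files sit in a local `LocalAccounts` directory (the helper names are illustrative):

```python
from pathlib import Path

def retarget(text, tenant):
    """Swap the starter pack's 'yourtenant' placeholder for a real tenant name."""
    return text.replace("yourtenant", tenant)

def retarget_policy_files(directory, tenant):
    # Rewrite every policy XML file in the directory in place.
    for xml_file in Path(directory).glob("*.xml"):
        xml_file.write_text(
            retarget(xml_file.read_text(encoding="utf-8"), tenant),
            encoding="utf-8",
        )

print(retarget('TenantId="yourtenant.onmicrosoft.com"', "contoso"))
# → TenantId="contoso.onmicrosoft.com"
```

For example, `retarget_policy_files("LocalAccounts", "contoso")` would update every `yourtenant.onmicrosoft.com` reference to `contoso.onmicrosoft.com`.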
+
+## Step 4: Add a user journey
+
+At this point, you've set up the identity provider, but it's not yet available in any of the sign in pages. If you have your own custom user journey, continue to [step 5](#step-5-add-the-identity-provider-to-a-user-journey). Otherwise, create a duplicate of an existing template user journey as follows:
+
+1. Open the `LocalAccounts/TrustFrameworkBase.xml` file from the starter pack.
+
+1. Find and copy the entire contents of the **UserJourney** element that includes `Id=SignUpOrSignIn`.
+
+1. Open the `LocalAccounts/TrustFrameworkExtensions.xml` file and find the **UserJourneys** element. If the element doesn't exist, add one.
+
+1. Paste the entire content of the UserJourney element that you copied as a child of the UserJourneys element.
+
+1. Rename the `Id` of the user journey. For example, `Id=CustomSignUpSignIn`.
+
+## Step 5: Add the identity provider to a user journey
Now that you have a user journey, add the new identity provider to the user journey.
-1. Find the orchestration step element that includes Type=`CombinedSignInAndSignUp`, or Type=`ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can sign in with. The order of the elements controls the order of the sign-in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name, such as `BindIDExchange`.
+1. Find the orchestration step element that includes `Type=CombinedSignInAndSignUp`, or `Type=ClaimsProviderSelection` in the user journey. It's usually the first orchestration step. The **ClaimsProviderSelections** element contains a list of identity providers that a user can sign in with. The order of the elements controls the order of the sign in buttons presented to the user. Add a **ClaimsProviderSelection** XML element. Set the value of **TargetClaimsExchangeId** to a friendly name, such as `BindIDExchange`.
-2. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID to link the BindID button to `BindID-SignIn` action. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier.
+1. In the next orchestration step, add a **ClaimsExchange** element. Set the **Id** to the value of the target claims exchange ID to link the BindID button to `BindID-SignIn` action. Update the value of **TechnicalProfileReferenceId** to the ID of the technical profile you created earlier while adding the claims provider.
The following XML demonstrates orchestration steps of a user journey with the identity provider:

```xml
-<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
- <ClaimsProviderSelections>
- ...
- <ClaimsProviderSelection TargetClaimsExchangeId="BindIDExchange" />
- </ClaimsProviderSelections>
- ...
-</OrchestrationStep>
-
-<OrchestrationStep Order="2" Type="ClaimsExchange">
- ...
- <ClaimsExchanges>
- <ClaimsExchange Id="BindIDExchange" TechnicalProfileReferenceId="BindID-OpenIdConnect" />
- </ClaimsExchanges>
-</OrchestrationStep>
+ <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
+ <ClaimsProviderSelections>
+ ...
+ <ClaimsProviderSelection TargetClaimsExchangeId="BindIDExchange" />
+ </ClaimsProviderSelections>
+ ...
+ </OrchestrationStep>
+
+ <OrchestrationStep Order="2" Type="ClaimsExchange">
+ ...
+ <ClaimsExchanges>
+ <ClaimsExchange Id="BindIDExchange" TechnicalProfileReferenceId="BindID-OpenIdConnect" />
+ </ClaimsExchanges>
+ </OrchestrationStep>
```
-### Step 6 - Configure the relying party policy
+## Step 6: Configure the relying party policy
-The relying party policy, for example [SignUpSignIn.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/master/SocialAccounts/SignUpOrSignin.xml), specifies the user journey which Azure AD B2C will execute. You can also control what claims are passed to your application by adjusting the **OutputClaims** element of the **PolicyProfile** TechnicalProfile element. In this sample, the application will receive the user attributes such as display name, given name, surname, email, objectId, identity provider, and tenantId.
+The relying party policy, for example [SignUpOrSignIn.xml](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/blob/master/LocalAccounts/SignUpOrSignin.xml), specifies the user journey which Azure AD B2C will execute. You can also control what claims are passed to your application by adjusting the **OutputClaims** element of the **PolicyProfile** TechnicalProfile element. In this sample, the application receives the user attributes such as display name, given name, surname, email, objectId, identity provider, and tenantId.
```xml
<RelyingParty>
  ...
</RelyingParty>
```
-### Step 7 - Upload the custom policy
+## Step 7: Upload the custom policy
-1. Sign in to the [Azure portal](https://portal.azure.com/#home).
+1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
-3. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
-4. In the [Azure portal](https://portal.azure.com/#home), search for and select **Azure AD B2C**.
+ 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
-5. Under Policies, select **Identity Experience Framework**.
+1. In the [Azure portal](https://portal.azure.com), search for and select **Azure AD B2C**.
-6. Select **Upload Custom Policy**, and then upload the two policy files that you changed, in the following order: the extension policy, for example `TrustFrameworkExtensions.xml`, then the relying party policy, such as `SignUpSignIn.xml`.
+1. Under **Policies**, select **Identity Experience Framework**.
+1. Select **Upload Custom Policy**, and then upload the files in the **LocalAccounts** starter pack in the following order: the base policy, for example `TrustFrameworkBase.xml`, the localization policy, for example `TrustFrameworkLocalization.xml`, the extension policy, for example `TrustFrameworkExtensions.xml`, and the relying party policy, such as `SignUpOrSignIn.xml`.
-### Step 8 - Test your custom policy
-1. Open the Azure AD B2C tenant and under Policies select **Identity Experience Framework**.
+## Step 8: Test your custom policy
-2. Select your previously created **CustomSignUpSignIn** and select the settings:
- a. **Application**: select the registered app (sample is JWT)
+1. In your Azure AD B2C tenant, under **Policies**, select **Identity Experience Framework**.
+
+1. Under **Custom policies**, select **B2C_1A_signup_signin**.
- b. **Reply URL**: select the **redirect URL** that should show `https://jwt.ms`.
- c. Select **Run now**.
+1. For **Application**, select the web application that you previously registered as part of this article's prerequisites. The **Reply URL** should show `https://jwt.ms`.
-If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
+1. Select **Run now**. Your browser should be redirected to the BindID sign in page.
+
+1. Enter the registered account email and authenticate using appless FIDO2 biometrics, such as fingerprint. Once the authentication challenge is accepted, your browser should be redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
::: zone-end
For additional information, review the following articles:
- [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy) -- [Sample custom policies for BindID and Azure AD B2C integration](https://github.com/TransmitSecurity/azure-ad-b2c-bindid-integration)--
+- [Sample custom policies for BindID and Azure AD B2C integration](https://github.com/TransmitSecurity/azure-ad-b2c-bindid-integration)
active-directory-b2c Protocols Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/protocols-overview.md
Previously updated : 11/30/2018 Last updated : 03/31/2022

# Azure AD B2C: Authentication protocols

Azure Active Directory B2C (Azure AD B2C) provides identity as a service for your apps by supporting two industry standard protocols: OpenID Connect and OAuth 2.0. The service is standards-compliant, but any two implementations of these protocols can have subtle differences. The information in this guide is useful if you write your code by directly sending and handling HTTP requests, rather than by using an open source library. We recommend that you read this page before you dive into the details of each specific protocol. But if you're already familiar with Azure AD B2C, you can go straight to [the protocol reference guides](#protocols).
<!-- TODO: Need link to libraries above -->

## The basics

Every app that uses Azure AD B2C needs to be registered in your B2C directory in the [Azure portal](https://portal.azure.com). The app registration process collects and assigns a few values to your app:

* An **Application ID** that uniquely identifies your app.
* A **Redirect URI** or **package identifier** that can be used to direct responses back to your app.
* A few other scenario-specific values.

For more information, learn [how to register your application](tutorial-register-applications.md).
-After you register your app, it communicates with Azure Active Directory (Azure AD) by sending requests to the endpoint:
+After you register your app, it communicates with Azure AD B2C by sending requests to the endpoint:
```
https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/oauth2/v2.0/authorize
https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/oauth2/v2.0/token
```

If you're using a [custom domain](custom-domain.md), replace `{tenant}.b2clogin.com` with the custom domain, such as `contoso.com`, in the endpoints.

In nearly all OAuth and OpenID Connect flows, four parties are involved in the exchange:

:::image type="content" source="./media/protocols-overview/protocols_roles.png" alt-text="Diagram showing the four OAuth 2.0 Roles.":::
-* The **authorization server** is the Azure AD endpoint. It securely handles anything related to user information and access. It also handles the trust relationships between the parties in a flow. It is responsible for verifying the user's identity, granting and revoking access to resources, and issuing tokens. It is also known as the identity provider.
+* The **authorization server** is the Azure AD B2C endpoint. It securely handles anything related to user information and access. It also handles the trust relationships between the parties in a flow. It is responsible for verifying the user's identity, granting and revoking access to resources, and issuing tokens. It is also known as the identity provider.
* The **resource owner** is typically the end user. It is the party that owns the data, and it has the power to allow third parties to access that data or resource.
* The **resource server** is where the resource or data resides. It trusts the authorization server to securely authenticate and authorize the OAuth client. It also uses bearer access tokens to ensure that access to a resource can be granted.

## Policies and user flows
-Arguably, Azure AD B2C policies are the most important features of the service. Azure AD B2C extends the standard OAuth 2.0 and OpenID Connect protocols by introducing policies. These allow Azure AD B2C to perform much more than simple authentication and authorization.
-To help you set up the most common identity tasks, the Azure AD B2C portal includes predefined, configurable policies called **user flows**. User flows fully describe consumer identity experiences, including sign-up, sign-in, and profile editing. User flows can be defined in an administrative UI. They can be executed by using a special query parameter in HTTP authentication requests.
+Azure AD B2C extends the standard OAuth 2.0 and OpenID Connect protocols by introducing policies. These allow Azure AD B2C to perform much more than simple authentication and authorization.
+
+To help you set up the most common identity tasks, the Azure AD B2C portal includes predefined, configurable policies called **user flows**. User flows fully describe consumer identity experiences, including sign up, sign in, and profile editing. User flows can be defined in an administrative UI. They can be executed by using a special query parameter in HTTP authentication requests.
Policies and user flows are not standard features of OAuth 2.0 and OpenID Connect, so you should take the time to understand them. For more information, see the [Azure AD B2C user flow reference guide](user-flow-overview.md).

## Tokens

The Azure AD B2C implementation of OAuth 2.0 and OpenID Connect makes extensive use of bearer tokens, including bearer tokens that are represented as JSON web tokens (JWTs). A bearer token is a lightweight security token that grants the "bearer" access to a protected resource.
-The bearer is any party that can present the token. Azure AD must first authenticate a party before it can receive a bearer token. But if the required steps are not taken to secure the token in transmission and storage, it can be intercepted and used by an unintended party.
+The bearer is any party that can present the token. Azure AD B2C must first authenticate a party before it can receive a bearer token. But if the required steps are not taken to secure the token in transmission and storage, it can be intercepted and used by an unintended party.
Some security tokens have built-in mechanisms that prevent unauthorized parties from using them, but bearer tokens do not have this mechanism. They must be transported in a secure channel, such as a transport layer security (HTTPS).
If a bearer token is transmitted outside a secure channel, a malicious party can
For additional bearer token security considerations, see [RFC 6750 Section 5](https://tools.ietf.org/html/rfc6750).
-More information about the different types of tokens that are used in Azure AD B2C are available in [the Azure AD token reference](tokens-overview.md).
+More information about the different types of tokens that are used in Azure AD B2C is available in [the Azure AD B2C token reference](tokens-overview.md).
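Because a JWT's payload segment is just base64url-encoded JSON, a few lines of code can illustrate what a bearer token carries. This is a sketch for inspection only, using a fabricated, unsigned token; never trust claims without validating the signature with a proper library:

```python
import base64
import json

def decode_jwt_claims(token):
    """Decode the claims segment of a JWT WITHOUT verifying its signature."""
    header_b64, payload_b64, _signature_b64 = token.split(".")
    # base64url decoding requires padding the segment to a multiple of 4
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Build a dummy, unsigned token just to demonstrate the decoding step
claims = {"iss": "https://contoso.b2clogin.com/", "scp": "offline_access"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"eyJhbGciOiJub25lIn0.{payload}.sig"
print(decode_jwt_claims(token)["iss"])  # https://contoso.b2clogin.com/
```

This also shows why bearer tokens must travel over HTTPS: anyone holding the string can read and replay it.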
## Protocols

When you're ready to review some example requests, you can start with one of the following tutorials. Each corresponds to a particular authentication scenario. If you need help determining which flow is right for you, check out [the types of apps you can build by using Azure AD B2C](application-types.md).

* [Build mobile and native applications by using OAuth 2.0](authorization-code-flow.md)
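As a rough illustration of the authorization code flow with PKCE, the following Python sketch derives an S256 code challenge and builds an `/authorize` request URL against the endpoint shape shown above. The helper names, the `contoso` tenant, the client ID, and the redirect URI are illustrative assumptions, not values from this article:

```python
import base64
import hashlib
import secrets
from urllib.parse import urlencode

def make_pkce_pair():
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def build_authorize_url(tenant, policy, client_id, redirect_uri, challenge):
    """Build the /authorize URL; the p parameter selects the user flow (policy)."""
    base = f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/oauth2/v2.0/authorize"
    query = urlencode({
        "p": policy,
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": "openid offline_access",
        "code_challenge": challenge,
        "code_challenge_method": "S256",
    })
    return f"{base}?{query}"

verifier, challenge = make_pkce_pair()
url = build_authorize_url("contoso", "B2C_1_susi", "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6",
                          "https://localhost/callback", challenge)
print(url)
```

Keep the `code_verifier` secret on the client; it is sent only in the follow-up token request.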
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Previously updated : 07/26/2021 Last updated : 03/16/2022
Helga@contoso.com,1234567,2234567abcdef2234567abcdef,60,Contoso,HardwareKey
```

> [!NOTE]
-> Make sure you include the header row in your CSV file. If a UPN has a single quote, escape it with another single quote. For example, if the UPN is my'user@domain.com, change it to my''user@domain.com when uploading the file.
+> Make sure you include the header row in your CSV file.
Once properly formatted as a CSV file, a Global Administrator can then sign in to the Azure portal, navigate to **Azure Active Directory > Security > MFA > OATH tokens**, and upload the resulting CSV file.
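If you generate the upload file programmatically, including the header row is easy to guarantee. A minimal sketch follows; the column names in `HEADER` reflect the documented CSV format, but verify them against the current article before uploading:

```python
import csv
import io

# Header row expected by the OATH tokens upload (column names as documented)
HEADER = ["upn", "serial number", "secret key", "time interval", "manufacturer", "model"]

def build_oath_csv(tokens):
    """Write token rows to CSV text, always emitting the header row first."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(HEADER)
    for row in tokens:
        writer.writerow(row)
    return buf.getvalue()

csv_text = build_oath_csv([
    ("Helga@contoso.com", "1234567", "2234567abcdef2234567abcdef", "60", "Contoso", "HardwareKey"),
])
print(csv_text)
```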
active-directory How To Certificate Based Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-certificate-based-authentication.md
Title: How to configure Azure AD certificate-based authentication without federation (Preview) - Azure Active Directory description: Topic that shows how to configure Azure AD certificate-based authentication in Azure Active Directory -
You can validate the crlDistributionPoint value you provide in the above PowerSh
The below table and graphic indicate how to map information from the CA Certificate to the attributes of the downloaded CRL.
-| CA Certificate Info | |Downloaded CRL Info|
+| CA Certificate Info |= |Downloaded CRL Info|
|-|:-:|-|
|Subject |=|Issuer |
|Subject Key Identifier |=|Authority Key Identifier (KeyID) |
active-directory Howto Authentication Passwordless Security Key On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-passwordless-security-key-on-premises.md
Run the following steps in each domain and forest in your organization that cont
1. Open a PowerShell prompt using the Run as administrator option.
1. Run the following PowerShell commands to create a new Azure AD Kerberos Server object both in your on-premises Active Directory domain and in your Azure Active Directory tenant.
+### Example 1: Prompt for all credentials
> [!NOTE] > Replace `contoso.corp.com` in the following example with your on-premises Active Directory domain name.
Run the following steps in each domain and forest in your organization that cont
Set-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred -DomainCredential $domainCred ```
+### Example 2: Prompt for cloud credential
> [!NOTE] > If you're working on a domain-joined machine with an account that has domain administrator privileges, you can skip the "-DomainCredential" parameter. If the "-DomainCredential" parameter isn't provided, the current Windows login credential is used to access your on-premises Active Directory Domain Controller.
Run the following steps in each domain and forest in your organization that cont
Set-AzureADKerberosServer -Domain $domain -CloudCredential $cloudCred ```
+### Example 3: Prompt for all credentials using modern authentication
> [!NOTE] > If your organization protects password-based sign-in and enforces modern authentication methods such as multifactor authentication, FIDO2, or smart card technology, you must use the `-UserPrincipalName` parameter with the User Principal Name (UPN) of a global administrator. > - Replace `contoso.corp.com` in the following example with your on-premises Active Directory domain name.
Run the following steps in each domain and forest in your organization that cont
Set-AzureADKerberosServer -Domain $domain -UserPrincipalName $userPrincipalName -DomainCredential $domainCred ```
+### Example 4: Prompt for cloud credentials using modern authentication
+ > [!NOTE]
+ > If you're working on a domain-joined machine with an account that has domain administrator privileges, and your organization protects password-based sign-in and enforces modern authentication methods such as multifactor authentication, FIDO2, or smart card technology, you must use the `-UserPrincipalName` parameter with the User Principal Name (UPN) of a global administrator. You can also skip the `-DomainCredential` parameter.
+ > - Replace `contoso.corp.com` in the following example with your on-premises Active Directory domain name.
+ > - Replace `administrator@contoso.onmicrosoft.com` in the following example with the UPN of a global administrator.
+
+ ```powershell
+ # Specify the on-premises Active Directory domain. A new Azure AD
+ # Kerberos Server object will be created in this Active Directory domain.
+ $domain = "contoso.corp.com"
+
+ # Enter a UPN of an Azure Active Directory global administrator
+ $userPrincipalName = "administrator@contoso.onmicrosoft.com"
+
+ # Create the new Azure AD Kerberos Server object in Active Directory
+ # and then publish it to Azure Active Directory.
+ # Open an interactive sign-in prompt with the given username to access Azure AD.
+ Set-AzureADKerberosServer -Domain $domain -UserPrincipalName $userPrincipalName
+ ```
+ ### View and verify the Azure AD Kerberos Server You can view and verify the newly created Azure AD Kerberos Server by using the following command:
Make sure that enough DCs are patched to respond in time to service your resourc
> [!NOTE] > The `/keylist` switch in the `nltest` command is available in client Windows 10 v2004 and later.
+### What if I have a CloudTGT that never gets exchanged for an OnPremTGT when I'm using Windows Hello for Business cloud trust?
+
+Make sure that the user you're signed in as is a member of a group of users that can use FIDO2 as an authentication method, or enable it for all users.
+
+> [!NOTE]
+> Even if you aren't explicitly using a security key to sign in to your device, the underlying technology depends on the FIDO2 infrastructure requirements.
### Do FIDO2 security keys work in a Windows login with RODC present in the hybrid environment?
active-directory How To Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-prerequisites.md
Run the [IdFix tool](/office365/enterprise/prepare-directory-attributes-for-sync
2. The PowerShell execution policy on the local server must be set to Undefined or RemoteSigned.
-3. If there's a firewall between your servers and Azure AD, configure the following items:
- - Ensure that agents can make *outbound* requests to Azure AD over the following ports:
-
- | Port number | How it's used |
- | | |
- | **80** | Downloads the certificate revocation lists (CRLs) while validating the TLS/SSL certificate. |
- | **443** | Handles all outbound communication with the service. |
- | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed in the Azure AD portal. |
-
- - If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.
- - If your firewall or proxy allows you to specify safe suffixes, add connections to \*.msappproxy.net and \*.servicebus.windows.net. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
- - If you are installing against the **US government** cloud, and your firewall or proxy allows you to specify safe suffixes, add connections to:
- - *.microsoftonline.us
- - *.microsoft.us
- - *.msappproxy.us
- - *.windowsazure.us
-
- - Your agents need access to login.windows.net and login.microsoftonline.com for initial registration. Open your firewall for those URLs as well.
- - For certificate validation, unblock the following URLs: mscrl.microsoft.com:80, crl.microsoft.com:80, ocsp.msocsp.com:80, and www\.microsoft.com:80. These URLs are used for certificate validation with other Microsoft products, so you might already have these URLs unblocked.
-
- >[!NOTE]
- > Installing the cloud provisioning agent on Windows Server Core is not supported.
+3. If there's a firewall between your servers and Azure AD, see [Firewall and proxy requirements](#firewall-and-proxy-requirements) below.
+
+>[!NOTE]
+> Installing the cloud provisioning agent on Windows Server Core is not supported.
### Additional requirements
To enable TLS 1.2, follow these steps.
```

1. Restart the server.
+## Firewall and proxy requirements
+If there's a firewall between your servers and Azure AD, configure the following items:
+
+- Ensure that agents can make *outbound* requests to Azure AD over the following ports:
+
+ | Port number | How it's used |
+ | | |
+ | **80** | Downloads the certificate revocation lists (CRLs) while validating the TLS/SSL certificate. |
+ | **443** | Handles all outbound communication with the service. |
+ | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed in the Azure AD portal. |
+
+- If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.
+- If your firewall or proxy allows you to specify safe suffixes, add connections to the following URLs:
+
+#### [Public Cloud](#tab/public-cloud)
++
+ |URL |How it's used|
+ |--|--|
+ |&#42;.msappproxy.net</br>&#42;.servicebus.windows.net|The agent uses these URLs to communicate with the Azure AD cloud service. |
+ |&#42;.microsoftonline.com</br>&#42;.microsoft.com</br>&#42;.msappproxy.com</br>&#42;.windowsazure.com|The agent uses these URLs to communicate with the Azure AD cloud service. |
+ |`mscrl.microsoft.com:80` </br>`crl.microsoft.com:80` </br>`ocsp.msocsp.com:80` </br>`www.microsoft.com:80`| The agent uses these URLs to verify certificates.|
 |login.windows.net|The agent uses this URL during the registration process.|
+++
+#### [U.S. Government Cloud](#tab/us-government-cloud)
+
+ |URL |How it's used|
+ |--|--|
+ |&#42;.msappproxy.us</br>&#42;.servicebus.usgovcloudapi.net|The agent uses these URLs to communicate with the Azure AD cloud service. |
+ |`mscrl.microsoft.us:80` </br>`crl.microsoft.us:80` </br>`ocsp.msocsp.us:80` </br>`www.microsoft.us:80`| The agent uses these URLs to verify certificates.|
 |login.windows.us </br>secure.aadcdn.microsoftonline-p.com </br>&#42;.microsoftonline.us </br>&#42;.microsoftonline-p.us </br>&#42;.msauth.net </br>&#42;.msauthimages.net </br>&#42;.msecnd.net</br>&#42;.msftauth.net </br>&#42;.msftauthimages.net</br>&#42;.phonefactor.net </br>enterpriseregistration.windows.net</br>management.azure.com </br>policykeyservice.dc.ad.msft.net</br>ctldl.windowsupdate.us:80| The agent uses these URLs during the registration process.|
++++
+- If you are unable to add connections, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
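If you script your firewall or proxy configuration, the safe-suffix rules above reduce to simple wildcard matches. A minimal sketch follows; the pattern list is copied from the public-cloud table, and `host_is_allowed` is a hypothetical helper, not part of any Azure tooling:

```python
from fnmatch import fnmatch

# Wildcard suffixes from the public-cloud table above, plus an exact host
SAFE_PATTERNS = [
    "*.msappproxy.net", "*.servicebus.windows.net",
    "*.microsoftonline.com", "*.microsoft.com",
    "*.msappproxy.com", "*.windowsazure.com",
    "login.windows.net",
]

def host_is_allowed(host):
    """Return True if the host matches any of the safe-suffix patterns."""
    return any(fnmatch(host.lower(), pattern) for pattern in SAFE_PATTERNS)

print(host_is_allowed("agent1.msappproxy.net"))  # True
print(host_is_allowed("example.org"))            # False
```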
++

## NTLM requirement

You shouldn't enable NTLM on the Windows Server that runs the Azure AD Connect Provisioning Agent, and if it's enabled you should make sure you disable it.
active-directory Active Directory Signing Key Rollover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-signing-key-rollover.md
Follow the steps below to verify that the key rollover logic is working.
### <a name="other"></a>Web applications / APIs protecting resources using any other libraries or manually implementing any of the supported protocols If you are using some other library or manually implemented any of the supported protocols, you'll need to review the library or your implementation to ensure that the key is being retrieved from either the OpenID Connect discovery document or the federation metadata document. One way to check for this is to do a search in your code or the library's code for any calls out to either the OpenID discovery document or the federation metadata document.
-If they key is being stored somewhere or hardcoded in your application, you can manually retrieve the key and update it accordingly by performing a manual rollover as per the instructions at the end of this guidance document. **It is strongly encouraged that you enhance your application to support automatic rollover** using any of the approaches outline in this article to avoid future disruptions and overhead if the Microsoft identity platform increases its rollover cadence or has an emergency out-of-band rollover.
+If the key is being stored somewhere or hardcoded in your application, you can manually retrieve the key and update it accordingly by performing a manual rollover as per the instructions at the end of this guidance document. **It is strongly encouraged that you enhance your application to support automatic rollover** using any of the approaches outlined in this article to avoid future disruptions and overhead if the Microsoft identity platform increases its rollover cadence or has an emergency out-of-band rollover.
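When you review your implementation, it helps to know what you're searching for in the discovery document. The following sketch parses a trimmed, illustrative OpenID Connect discovery document (not a live response) and extracts `jwks_uri`, the endpoint where the current signing keys are published:

```python
import json

# A trimmed, illustrative OpenID Connect discovery document (not a live response)
discovery_json = """
{
  "issuer": "https://login.microsoftonline.com/{tenantid}/v2.0",
  "authorization_endpoint": "https://login.microsoftonline.com/common/oauth2/v2.0/authorize",
  "jwks_uri": "https://login.microsoftonline.com/common/discovery/v2.0/keys"
}
"""

def get_jwks_uri(document_text):
    """Extract jwks_uri, the endpoint where current signing keys are published."""
    return json.loads(document_text)["jwks_uri"]

print(get_jwks_uri(discovery_json))
```

An implementation that supports automatic rollover re-fetches this document (or the JWKS it points to) rather than caching keys indefinitely.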
## How to test your application to determine if it will be affected
-You can validate whether your application supports automatic key rollover by downloading the scripts and following the instructions in [this GitHub repository.](https://github.com/AzureAD/azure-activedirectory-powershell-tokenkey)
+
+You can validate whether your application supports automatic key rollover by using the following PowerShell scripts.
+
+To check and update signing keys with PowerShell, you'll need the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module.
+
+1. Install the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module:
+
+ ```powershell
+ Install-Module -Name MSIdentityTools
+ ```
+
+1. Sign in by using the Connect-MgGraph command with an admin account to consent to the required scopes:
+
+ ```powershell
+ Connect-MgGraph -Scope "Application.ReadWrite.All"
+ ```
+
+1. Get the list of available signing key thumbprints:
+
+ ```powershell
+ Get-MsIdSigningKeyThumbprint
+ ```
+
+1. Pick any of the key thumbprints and configure Azure Active Directory to use that key with your application (get the app ID from the [Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps)):
+
+ ```powershell
+ Update-MsIdApplicationSigningKeyThumbprint -ApplicationId <ApplicationId> -KeyThumbprint <Thumbprint>
+ ```
+
+1. Test the web application by signing in to get a new token. The key update change is instantaneous, but make sure you use a new browser session (using, for example, Internet Explorer's "InPrivate," Chrome's "Incognito," or Firefox's "Private" mode) to ensure you are issued a new token.
+
+1. For each of the returned signing key thumbprints, run the `Update-MsIdApplicationSigningKeyThumbprint` cmdlet and test your web application sign-in process.
+
+1. If the web application signs you in properly, it supports automatic rollover. If it doesn't, modify your application to support manual rollover. Check out [Establishing a manual rollover process](#how-to-perform-a-manual-rollover-if-your-application-does-not-support-automatic-rollover) for more information.
+
+1. Run the following script to revert to normal behavior:
+
+ ```powershell
+ Update-MsIdApplicationSigningKeyThumbprint -ApplicationId <ApplicationId> -Default
+ ```
## How to perform a manual rollover if your application does not support automatic rollover
-If your application does **not** support automatic rollover, you will need to establish a process that periodically monitors Microsoft identity platform's signing keys and performs a manual rollover accordingly. [This GitHub repository](https://github.com/AzureAD/azure-activedirectory-powershell-tokenkey) contains scripts and instructions on how to do this.
+If your application doesn't support automatic rollover, you need to establish a process that periodically monitors Microsoft identity platform's signing keys and performs a manual rollover accordingly.
+
+To check and update signing keys with PowerShell, you'll need the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module.
+
+1. Install the [MSIdentityTools](https://www.powershellgallery.com/packages/MSIdentityTools) PowerShell Module:
+
+ ```powershell
+ Install-Module -Name MSIdentityTools
+ ```
+
+1. Get the latest signing key (get the tenant ID from the [Azure portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview)):
+
+ ```powershell
+ Get-MsIdSigningKeyThumbprint -Tenant <tenantId> -Latest
+ ```
+
+1. Compare this key against the key your application is currently hardcoded or configured to use.
+
+1. If the latest key is different from the key your application is using, download the latest signing key:
+
+ ```powershell
+ Get-MsIdSigningKeyThumbprint -Latest -DownloadPath <DownloadFolderPath>
+ ```
+
+1. Update your application's code or configuration to use the new key.
+
+1. Configure Azure Active Directory to use that latest key with your application (get the app ID from the [portal](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps)):
+
+ ```powershell
+ Get-MsIdSigningKeyThumbprint -Latest | Update-MsIdApplicationSigningKeyThumbprint -ApplicationId <ApplicationId>
+ ```
+
+1. Test the web application by signing in to get a new token. The key update change is instantaneous, but make sure you use a new browser session (using, for example, Internet Explorer's "InPrivate," Chrome's "Incognito," or Firefox's "Private" mode) to ensure you are issued a new token.
+
+1. If you experience any issues, revert to the previous key you were using and contact Azure support:
+
+ ```powershell
+ Update-MsIdApplicationSigningKeyThumbprint -ApplicationId <ApplicationId> -KeyThumbprint <PreviousKeyThumbprint>
+ ```
+
+1. After you update your application to support manual rollover, revert to normal behavior:
+
+ ```powershell
+ Update-MsIdApplicationSigningKeyThumbprint -ApplicationId <ApplicationId> -Default
+ ```
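If your application caches a signing certificate, you can compute its thumbprint locally to compare against the value returned by `Get-MsIdSigningKeyThumbprint`. A certificate thumbprint is conventionally the SHA-1 hash of the certificate's DER bytes; this sketch uses placeholder bytes rather than a real certificate:

```python
import base64
import hashlib

def certificate_thumbprint(x5c_value):
    """Compute the SHA-1 thumbprint of a certificate given its base64 (x5c) form."""
    der_bytes = base64.b64decode(x5c_value)
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Placeholder bytes standing in for a real DER-encoded certificate
fake_cert = base64.b64encode(b"not-a-real-certificate").decode()
print(certificate_thumbprint(fake_cert))
```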
active-directory Mark App As Publisher Verified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md
Last updated 09/27/2021 -+ # Mark your app as publisher verified
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
Last updated 06/01/2021 -+ # Publisher verification
There are a few pre-requisites for publisher verification, some of which will ha
- In Partner Center this user must have of the following [roles](/partner-center/permissions-overview): MPN Admin, Accounts Admin, or a Global Admin (this is a shared role mastered in Azure AD). -- The user performing verification must sign in using [multifactor authentication](../authentication/howto-mfa-getstarted.md).
+- The user performing verification must sign in using [multi-factor authentication](../authentication/howto-mfa-getstarted.md).
- The publisher agrees to the [Microsoft identity platform for developers Terms of Use](/legal/microsoft-identity-platform/terms-of-use).
active-directory Troubleshoot Publisher Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/troubleshoot-publisher-verification.md
Last updated 10/21/2021 -+ # Troubleshoot publisher verification
-If you are unable to complete the process or are experiencing unexpected behavior with [publisher verification](publisher-verification-overview.md), you should start by doing the following if you are receiving errors or seeing unexpected behavior:
+If you're unable to complete the process or are experiencing unexpected behavior with [publisher verification](publisher-verification-overview.md), start by doing the following:
-1. Review the [requirements](publisher-verification-overview.md#requirements) and ensure they have all been met.
+1. Review the [requirements](publisher-verification-overview.md#requirements) and ensure they've all been met.
1. Review the instructions to [mark an app as publisher verified](mark-app-as-publisher-verified.md) and ensure all steps have been performed successfully.
Below are some common issues that may occur during the process.
- **I don't know my Microsoft Partner Network ID (MPN ID) or I don't know who the primary contact for the account is**

  1. Navigate to the [MPN enrollment page](https://partner.microsoft.com/dashboard/account/v3/enrollment/joinnow/basicpartnernetwork/new)
  1. Sign in with a user account in the org's primary Azure AD tenant
- 1. If an MPN account already exists, this will be recognized and you will be added to the account
+ 1. If an MPN account already exists, this will be recognized and you'll be added to the account
  1. Navigate to the [partner profile page](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) where the MPN ID and primary account contact will be listed

- **I don't know who my Azure AD Global Administrator (also known as company admin or tenant admin) is. How do I find them? What about the Application Administrator or Cloud Application Administrator?**

  1. Sign in to the [Azure AD Portal](https://aad.portal.azure.com) using a user account in your organization's primary tenant
  1. Navigate to [Role Management](https://aad.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RolesAndAdministrators)
- 1. Click the desired admin role
+ 1. Select the desired admin role
  1. The list of users assigned that role will be displayed

- **I don't know who the admin(s) for my MPN account are**
Below are some common issues that may occur during the process.
  1. Go to your [partner profile](https://partner.microsoft.com/pcv/accountsettings/connectedpartnerprofile) and verify that:
     - The MPN ID is correct.
     - There are no errors or "pending actions" shown, and the verification status under Legal business profile and Partner info both say "authorized" or "success".
- 1. Go to the [MPN tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in and that you are signing with a user account from is on the list of associated tenants. To add an additional tenant, follow the instructions [here](/partner-center/multi-tenant-account). Be aware that all Global Admins of any tenant you add will be granted Global Admin privileges on your Partner Center account.
- 1. Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you are signing in as is either a Global Admin, MPN Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions [here](/partner-center/create-user-accounts-and-set-permissions).
+ 1. Go to the [MPN tenant management page](https://partner.microsoft.com/dashboard/account/v3/tenantmanagement) and confirm that the tenant the app is registered in and that you're signing with a user account from is on the list of associated tenants. To add another tenant, follow the instructions [here](/partner-center/multi-tenant-account). Be aware that all Global Admins of any tenant you add will be granted Global Admin privileges on your Partner Center account.
+ 1. Go to the [MPN User Management page](https://partner.microsoft.com/pcv/users) and confirm the user you're signing in as is either a Global Admin, MPN Admin, or Accounts Admin. To add a user to a role in Partner Center, follow the instructions [here](/partner-center/create-user-accounts-and-set-permissions).
- **When I sign into the Azure AD portal, I do not see any apps registered. Why?**
- Your app registrations may have been created using a different user account in this tenant, a personal/consumer account, or in a different tenant. Ensure you are signed in with the correct account in the tenant where your app registrations were created.
+ Your app registrations may have been created using a different user account in this tenant, a personal/consumer account, or in a different tenant. Ensure you're signed in with the correct account in the tenant where your app registrations were created.
- **I'm getting an error related to multi-factor authentication. What should I do?**
- Ensure [multifactor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) is enabled and **required** for the user you are signing in with and for this scenario. For example, MFA could be:
- - Always required for the user you are signing in with
+ Ensure [multi-factor authentication](../fundamentals/concept-fundamentals-mfa-get-started.md) is enabled and **required** for the user you're signing in with and for this scenario. For example, MFA could be:
+ - Always required for the user you're signing in with
- [Required for Azure management](../conditional-access/howto-conditional-access-policy-azure-management.md).
- - [Required for the type of administrator](../conditional-access/howto-conditional-access-policy-admin-mfa.md) you are signing in with.
+ - [Required for the type of administrator](../conditional-access/howto-conditional-access-policy-admin-mfa.md) you're signing in with.
## Making Microsoft Graph API calls
-If you are having an issue but unable to understand why based on what you are seeing in the UI, it may be helpful to perform further troubleshooting by using Microsoft Graph calls to perform the same operations you can perform in the App Registration portal.
+If you're having an issue but are unable to understand why based on what you're seeing in the UI, it may be helpful to perform further troubleshooting by using Microsoft Graph calls to perform the same operations you can perform in the App Registration portal.
-The easiest way to make these requests is using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). You may also consider other options like using [Postman](https://www.postman.com/), or using PowerShell to [invoke a web request](/powershell/module/microsoft.powershell.utility/invoke-webrequest).
+The easiest way to make these requests is to use [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer). You may also consider other options like using [Postman](https://www.postman.com/), or using PowerShell to [invoke a web request](/powershell/module/microsoft.powershell.utility/invoke-webrequest).
You can use Microsoft Graph to both set and unset your app's verified publisher and check the result after performing one of these operations. The result can be seen on both the [application](/graph/api/resources/application) object corresponding to your app registration and any [service principals](/graph/api/resources/serviceprincipal) that have been instantiated from that app. For more information on the relationship between those objects, see: [Application and service principal objects in Azure Active Directory](app-objects-and-service-principals.md).
The following is a list of the potential error codes you may receive, either whe
### MPNAccountNotFoundOrNoAccess
-The MPN ID you provided (`MPNID`) does not exist, or you do not have access to it. Provide a valid MPN ID and try again.
+The MPN ID you provided (`MPNID`) doesn't exist, or you don't have access to it. Provide a valid MPN ID and try again.
Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Partner Center. See [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information. Can also be caused by the tenant the app is registered in not being added to the MPN account, or an invalid MPN ID.
### MPNGlobalAccountNotFound
-The MPN ID you provided (`MPNID`) is not valid. Provide a valid MPN ID and try again.
+The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
Most commonly caused when an MPN ID is provided which corresponds to a Partner Location Account (PLA). Only Partner Global Accounts are supported. See [Partner Center account structure](/partner-center/account-structure) for more details.
### MPNAccountInvalid
-The MPN ID you provided (`MPNID`) is not valid. Provide a valid MPN ID and try again.
+The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
Most commonly caused by the wrong MPN ID being provided.
### MPNAccountNotVetted
-The MPN ID (`MPNID`) you provided has not completed the vetting process. Complete this process in Partner Center and try again.
+The MPN ID (`MPNID`) you provided hasn't completed the vetting process. Complete this process in Partner Center and try again.
-Most commonly caused by when the MPN account has not completed the [verification](/partner-center/verification-responses) process.
+Most commonly caused when the MPN account hasn't completed the [verification](/partner-center/verification-responses) process.
### NoPublisherIdOnAssociatedMPNAccount
-The MPN ID you provided (`MPNID`) is not valid. Provide a valid MPN ID and try again.
+The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
Most commonly caused by the wrong MPN ID being provided.
### MPNIdDoesNotMatchAssociatedMPNAccount
-The MPN ID you provided (`MPNID`) is not valid. Provide a valid MPN ID and try again.
+The MPN ID you provided (`MPNID`) isn't valid. Provide a valid MPN ID and try again.
Most commonly caused by the wrong MPN ID being provided.
### ApplicationNotFound
-The target application (`AppId`) cannot be found. Provide a valid application ID and try again.
+The target application (`AppId`) can't be found. Provide a valid application ID and try again.
Most commonly caused when verification is being performed via Graph API, and the ID of the application provided is incorrect. Note: the ID of the application must be provided, not the AppId/ClientId.
### B2CTenantNotAllowed
-This capability is not supported in an Azure AD B2C tenant.
+This capability isn't supported in an Azure AD B2C tenant.
### EmailVerifiedTenantNotAllowed
-This capability is not supported in an email verified tenant.
+This capability isn't supported in an email verified tenant.
### NoPublisherDomainOnApplication
The target application (`AppId`) must have a Publisher Domain set. Set a Publisher Domain and try again.
-Occurs when a [Publisher Domain](howto-configure-publisher-domain.md) is not configured on the app.
+Occurs when a [Publisher Domain](howto-configure-publisher-domain.md) isn't configured on the app.
### PublisherDomainMismatch
-The target application's Publisher Domain (`publisherDomain`) does not match the domain used to perform email verification in Partner Center (`pcDomain`). Ensure these domains match and try again.
+The target application's Publisher Domain (`publisherDomain`) doesn't match the domain used to perform email verification in Partner Center (`pcDomain`). Ensure these domains match and try again.
Occurs when neither the app's [Publisher Domain](howto-configure-publisher-domain.md) nor any of the [custom domains](../fundamentals/add-custom-domain.md) added to the Azure AD tenant matches the domain used to perform email verification in Partner Center.
### NotAuthorizedToVerifyPublisher
-You are not authorized to set the verified publisher property on application (<`AppId`)
+You aren't authorized to set the verified publisher property on application (`AppId`).
Most commonly caused by the signed-in user not being a member of the proper role for the MPN account in Azure AD. See [requirements](publisher-verification-overview.md#requirements) for a list of eligible roles and see [common issues](#common-issues) for more information.
### MPNIdWasNotProvided
-The MPN ID was not provided in the request body or the request content type was not "application/json".
+The MPN ID wasn't provided in the request body or the request content type wasn't "application/json".
### MSANotSupported
-This feature is not supported for Microsoft consumer accounts. Only applications registered in Azure AD by an Azure AD user are supported.
+This feature isn't supported for Microsoft consumer accounts. Only applications registered in Azure AD by an Azure AD user are supported.
### InteractionRequired
-Occurs when multifactor authentication has not been performed before attempting to add a verified publisher to the app. See [common issues](#common-issues) for more information. Note: MFA must be performed in the same session when attempting to add a verified publisher. If MFA is enabled but not required to be performed in the session, the request will fail.
+Occurs when multi-factor authentication hasn't been performed before attempting to add a verified publisher to the app. See [common issues](#common-issues) for more information. Note: MFA must be performed in the same session when attempting to add a verified publisher. If MFA is enabled but not required to be performed in the session, the request will fail.
-The error message displayed will be: "Due to a configuration change made by your administrator, or because you moved to a new location, you must use multifactor authentication to proceed."
+The error message displayed will be: "Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to proceed."
### UnableToAddPublisher
-One of these error messages are displayed: "A verified publisher cannot be added to this application. Contact your administrator for assistance.", or "You are unable to add a verified publisher to this application. Contact your administrator for assistance."
+One of these error messages is displayed: "A verified publisher can't be added to this application. Contact your administrator for assistance.", or "You're unable to add a verified publisher to this application. Contact your administrator for assistance."
First, verify you've met the [publisher verification requirements](publisher-verification-overview.md#requirements).
-When a request to add a verified publisher is made, many signals are used to make a security risk assessment. If the request is determined to be risky an error will be returned. For security reasons, Microsoft does not disclose the specific criteria used to determine whether a request is risky or not. If you received this error and believe the "risky" assessment is incorrect, try waiting and resubmitting the verification request. Some customers have reported success after multiple attempts.
+When a request to add a verified publisher is made, many signals are used to make a security risk assessment. If the request is determined to be risky, an error is returned. For security reasons, Microsoft doesn't disclose the specific criteria used to determine whether a request is risky or not. If you received this error and believe the "risky" assessment is incorrect, try waiting and resubmitting the verification request. Some customers have reported success after multiple attempts.
## Next steps
-If you have reviewed all of the previous information and are still receiving an error from Microsoft Graph, gather as much of the following information as possible related to the failing request and [contact Microsoft support](developer-support-help-options.md#create-an-azure-support-request).
+If you've reviewed all of the previous information and are still receiving an error from Microsoft Graph, gather as much of the following information as possible related to the failing request and [contact Microsoft support](developer-support-help-options.md#create-an-azure-support-request).
- Timestamp
- CorrelationId
active-directory V2 Oauth2 On Behalf Of Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-on-behalf-of-flow.md
The OBO flow only works for user principals at this time. A service principal ca
This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
-As of May 2018, some implicit-flow derived `id_token` can't be used for OBO flow. Single-page apps (SPAs) should pass an **access** token to a middle-tier confidential client to perform OBO flows instead. For more info about which clients can perform OBO calls, see [limitations](#client-limitations).
- [!INCLUDE [try-in-postman-link](includes/try-in-postman-link.md)]
+## Client limitations
+
+As of May 2018, some implicit-flow derived `id_token` can't be used for OBO flow. Single-page apps (SPAs) should pass an **access** token to a middle-tier confidential client to perform OBO flows instead.
+
+If a client uses the implicit flow to get an id_token, and that client also has wildcards in a reply URL, the id_token can't be used for an OBO flow. However, access tokens acquired through the implicit grant flow can still be redeemed by a confidential client even if the initiating client has a wildcard reply URL registered.
+
+Additionally, applications with custom signing keys cannot be used as middle-tier APIs in the OBO flow (this includes enterprise applications configured for single sign-on). This results in an error because tokens signed with a key controlled by the client cannot be safely accepted.
+
## Protocol diagram

Assume that the user has been authenticated on an application using the [OAuth 2.0 authorization code grant flow](v2-oauth2-auth-code-flow.md) or another login flow. At this point, the application has an access token *for API A* (token A) with the user's claims and consent to access the middle-tier web API (API A). Now, API A needs to make an authenticated request to the downstream web API (API B).
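A minimal sketch of the token request API A sends in this exchange. Every value below is a placeholder, not a working credential; the parameter names follow the OBO grant described in this article.

```python
# Sketch of the middle-tier's OBO token request. `token_a` stands in for the
# access token received from the caller; all other values are placeholders.
token_a = "eyJ0eXAiOi..."  # truncated placeholder, not a real token

token_endpoint = "https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
form = {
    "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
    "client_id": "00000000-0000-0000-0000-000000000000",
    "client_secret": "<middle-tier-client-secret>",   # confidential client credential
    "assertion": token_a,                             # token A, issued for API A
    "scope": "https://graph.microsoft.com/user.read", # downstream API B scope
    "requested_token_use": "on_behalf_of",            # marks this as an OBO exchange
}
```

Posting this form to the token endpoint returns a token for API B that still carries the user's identity.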
A tenant admin can guarantee that applications have permission to call their req
In some scenarios, you may only have a single pairing of middle-tier and front-end client. In this scenario, you may find it easier to make this a single application, negating the need for a middle-tier application altogether. To authenticate between the front-end and the web API, you can use cookies, an id_token, or an access token requested for the application itself. Then, request consent from this single application to the back-end resource.
-## Client limitations
-
-If a client uses the implicit flow to get an id_token, and that client also has wildcards in a reply URL, the id_token can't be used for an OBO flow. However, access tokens acquired through the implicit grant flow can still be redeemed by a confidential client even if the initiating client has a wildcard reply URL registered.
-
## Next steps
Learn more about the OAuth 2.0 protocol and another way to perform service-to-service auth using client credentials.
active-directory V2 Saml Bearer Assertion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-saml-bearer-assertion.md
For more information about app registration and authentication flow, see:
- [Register an application with the Microsoft identity platform](quickstart-register-app.md) - [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)+
+<!-- _This article was originally contributed by [Umesh Barapatre](https://github.com/umeshbarapatre)._ -->
active-directory Entitlement Management Catalog Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/entitlement-management-catalog-create.md
To include resources in an access package, the resources must exist in a catalog
* Groups can be cloud-created Microsoft 365 Groups or cloud-created Azure AD security groups. Groups that originate in an on-premises Active Directory can't be assigned as resources because their owner or member attributes can't be changed in Azure AD. Groups that originate in Exchange Online as Distribution groups can't be modified in Azure AD either.
* Applications can be Azure AD enterprise applications, which include both software as a service (SaaS) applications and your own applications integrated with Azure AD. For more information on how to select appropriate resources for applications with multiple roles, see [Add resource roles](entitlement-management-access-package-resources.md#add-resource-roles).
* Sites can be SharePoint Online sites or SharePoint Online site collections.
+> [!NOTE]
+> Search for a SharePoint site by site name or an exact URL. The search box is case-sensitive.
**Prerequisite roles:** See [Required roles to add resources to a catalog](entitlement-management-delegate.md#required-roles-to-add-resources-to-a-catalog).
You can also delete a catalog by using Microsoft Graph. A user in an appropriate
## Next steps
-[Delegate access governance to access package managers](entitlement-management-delegate-managers.md)
+[Delegate access governance to access package managers](entitlement-management-delegate-managers.md)
active-directory Reference Connect Government Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-government-cloud.md
The following information describes implementation of Pass-through Authenticatio
Before you deploy the Pass-through Authentication agent, verify whether a firewall exists between your servers and Azure AD. If your firewall or proxy allows Domain Name System (DNS) blocked or safe programs, add the following connections.
-> [!NOTE]
-> The following guidance also applies to installing the [Azure AD Application Proxy connector](../app-proxy/what-is-application-proxy.md) for Azure Government environments.
+> [!IMPORTANT]
+> The following guidance applies only to:
+> - the pass-through authentication agent
+> - [Azure AD Application Proxy connector](../app-proxy/what-is-application-proxy.md)
+>
+> For information on URLs for the Azure Active Directory Connect Provisioning Agent, see the [installation prerequisites](../cloud-sync/how-to-prerequisites.md) for cloud sync.
+
|URL |How it's used|
|--|--|
active-directory Workbook Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-legacy authentication.md
++
+ Title: Sign-ins using legacy authentication workbook in Azure AD | Microsoft Docs
+description: Learn how to use the sign-ins using legacy authentication workbook.
+
+documentationcenter: ''
++
+editor: ''
+Last updated : 03/16/2022
+# Sign-ins using legacy authentication workbook
+
+Have you ever wondered how you can determine whether it's safe to turn off legacy authentication in your tenant? The sign-ins using legacy authentication workbook helps you answer this question.
+
+This article gives you an overview of this workbook.
++
+## Description
+
+![Workbook category](./media/workbook-risk-analysis/workbook-category.png)
+
+Azure AD supports several of the most widely used authentication and authorization protocols including legacy authentication. Legacy authentication refers to basic authentication, which was once a widely used industry-standard method for passing user name and password information through a client to an identity provider.
+
+Examples of applications that commonly or only use legacy authentication are:
+
+- Microsoft Office 2013 or older.
+
+- Apps using legacy auth with mail protocols like POP, IMAP, and SMTP AUTH.
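The basic authentication scheme those mail protocols rely on can be sketched in a few lines. The credentials below are hypothetical; the point is that `base64` only encodes, it doesn't protect.

```python
import base64

# Sketch: what a legacy "basic" Authorization header carries. The credential
# pair is a reversible base64 encoding, which is why this scheme can't carry
# any MFA proof - the raw password itself is the whole credential.
username, password = "user@contoso.com", "hunter2"  # hypothetical values
credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
header = f"Authorization: Basic {credentials}"

# Decoding recovers the raw password, so anything that can read or replay the
# header holds the user's long-lived credential.
recovered = base64.b64decode(credentials).decode()
```

This is why moving clients off these protocols, rather than hardening them, is the recommended path.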
++
+Single-factor authentication (for example, username and password) doesn't provide the required level of protection for today's computing environments. Passwords are easy to guess, and humans are bad at choosing good passwords.
++
+Unfortunately, legacy authentication:
+
+- Does not support multi-factor authentication (MFA) or other strong authentication methods.
+
+- Makes it impossible for your organization to move to passwordless authentication.
+
+To improve the security of your Azure AD tenant and the experience of your users, you should disable legacy authentication. However, important user experiences in your tenant might depend on legacy authentication. Before shutting off legacy authentication, you may want to find those cases so you can migrate them to more secure authentication.
+
+The sign-ins using legacy authentication workbook lets you see all legacy authentication sign-ins in your environment so you can find and migrate critical workflows to more secure authentication methods before you shut off legacy authentication.
+
+
+
+
+## Sections
+
+With this workbook, you can distinguish between interactive and non-interactive sign-ins. This workbook highlights which legacy authentication protocols are used throughout your tenant.
+
+The data collection consists of three steps:
+
+1. Select a legacy authentication protocol, and then select an application to filter by users accessing that application.
+
+2. Select a user to see all their legacy authentication sign-ins to the selected app.
+
+3. View all legacy authentication sign-ins for the user to understand how legacy authentication is being used.
+++
+
++
+## Filters
++
+This workbook supports multiple filters:
++
+- Time range (up to 90 days)
+
+- User principal name
+
+- Application
+
+- Status of the sign-in (success or failure)
++
+![Filter options](./media/workbook-legacy-authentication/filter-options.png)
++
+## Best practices
++
+- **[Enable risky sign-in policies](../identity-protection/concept-identity-protection-policies.md)** - To prompt for multi-factor authentication (MFA) on medium risk or above. Enabling the policy reduces the proportion of active real-time risk detections by allowing legitimate users to self-remediate the risk detections with MFA.
+
+- **[Enable a risky user policy](../identity-protection/howto-identity-protection-configure-risk-policies.md#user-risk-with-conditional-access)** - To enable users to securely remediate their accounts when they are high risk. Enabling the policy reduces the number of active at-risk users in your organization by returning the user's credentials to a safe state.
+++++
+## Next steps
+
+- To learn more about identity protection, see [What is identity protection](../identity-protection/overview-identity-protection.md).
+
+- For more information about Azure AD workbooks, see [How to use Azure AD workbooks](howto-use-azure-monitor-workbooks.md).
+
aks Aks Resource Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-resource-health.md
Resource Health receives signals for your managed cluster to determine the clust
- **Degraded**: When there is a health issue requiring your action, Resource Health reports your cluster as *Degraded*.
+Note that the Resource Health for an AKS cluster is different from the Resource Health of its individual resources (*virtual machines, scale set instances, load balancers, and so on*).
For additional details on what each health status indicates, visit [Resource Health overview](../service-health/resource-health-overview.md#health-status).
### View historical data
You can also view the past 30 days of historical Resource Health information in
## Next steps
-Run checks on your cluster to further troubleshoot cluster issues by using [AKS Diagnostics](./concepts-diagnostics.md).
+Run checks on your cluster to further troubleshoot cluster issues by using [AKS Diagnostics](./concepts-diagnostics.md).
aks Concepts Clusters Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-clusters-workloads.md
You can build and run modern, portable, microservices-based applications, using
As an open platform, Kubernetes allows you to build your applications with your preferred programming language, OS, libraries, or messaging bus. Existing continuous integration and continuous delivery (CI/CD) tools can integrate with Kubernetes to schedule and deploy releases.
-AKS provides a managed Kubernetes service that reduces the complexity of deployment and core management tasks, like upgrade coordination. The Azure platform manages the AKS control plane, and you only pay for the AKS nodes that run your applications. AKS is built on top of the open-source Azure Kubernetes Service Engine: [aks-engine][aks-engine].
+AKS provides a managed Kubernetes service that reduces the complexity of deployment and core management tasks, like upgrade coordination. The Azure platform manages the AKS control plane, and you only pay for the AKS nodes that run your applications.
## Kubernetes cluster architecture
This article covers some of the core Kubernetes components and how they apply to
- [Kubernetes / AKS scale][aks-concepts-scale] <!-- EXTERNAL LINKS -->
-[aks-engine]: https://github.com/Azure/aks-engine
[cluster-api-provider-azure]: https://github.com/kubernetes-sigs/cluster-api-provider-azure [kubernetes-pods]: https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/ [kubernetes-pod-lifecycle]: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/
aks Ingress Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-basic.md
description: Learn how to install and configure a basic NGINX ingress controller in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/23/2021 Last updated : 03/07/2022
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
- kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
- nginx.ingress.kubernetes.io/rewrite-target: /$1
+ nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
+ ingressClassName: nginx
  rules:
  - http:
      paths:
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
- kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
+ ingressClassName: nginx
  rules:
  - http:
      paths:
Create the ingress resource using the `kubectl apply -f hello-world-ingress.yaml
```
$ kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
-ingress.extensions/hello-world-ingress created
-ingress.extensions/hello-world-ingress-static created
+ingress.networking.k8s.io/hello-world-ingress created
+ingress.networking.k8s.io/hello-world-ingress-static created
```
## Test the ingress controller
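The move from `/$1` to `/$2` in the `rewrite-target` annotation above pairs with path patterns that add a `(/|$)` capture group. A minimal sketch of that behavior, assuming a hypothetical path regex of that common shape:

```python
import re

# Hypothetical path pattern of the shape commonly paired with
# rewrite-target /$2: group 1 eats the separator, group 2 keeps the remainder.
pattern = re.compile(r"/hello-world-one(/|$)(.*)")

m = pattern.match("/hello-world-one/static/style.css")
remainder = m.group(2)  # what /$2 rewrites to

root = pattern.match("/hello-world-one")
root_remainder = root.group(2)  # bare path rewrites to the app root
```

With `(/|$)` occupying group 1, `$1` would capture only the separator, which is why the annotation now references `$2`.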
aks Ingress Internal Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-internal-ip.md
description: Learn how to install and configure an NGINX ingress controller for an internal, private network in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/23/2021 Last updated : 03/04/2022
metadata:
  name: hello-world-ingress
  namespace: ingress-basic
  annotations:
- kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
- nginx.ingress.kubernetes.io/rewrite-target: /$1
+ nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
+ ingressClassName: nginx
  rules:
  - http:
      paths:
The following example output shows the ingress resource is created.
```
$ kubectl apply -f hello-world-ingress.yaml
-ingress.extensions/hello-world-ingress created
+ingress.networking.k8s.io/hello-world-ingress created
```
## Test the ingress controller
ingress.extensions/hello-world-ingress created
To test the routes for the ingress controller, browse to the two applications with a web client. If needed, you can quickly test this internal-only functionality from a pod on the AKS cluster. Create a test pod and attach a terminal session to it:

```console
-kubectl run -it --rm aks-ingress-test --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11 --namespace ingress-basic
+kubectl run -it --rm aks-ingress-test --image=mcr.microsoft.com/dotnet/runtime-deps:6.0 --namespace ingress-basic
```

Install `curl` in the pod using `apt-get`:
aks Ingress Own Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-own-tls.md
description: Learn how to install and configure an NGINX ingress controller that uses your own certificates in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/23/2021 Last updated : 03/07/2022
metadata:
  name: hello-world-ingress
  namespace: ingress-basic
  annotations:
- kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
+ ingressClassName: nginx
  tls:
  - hosts:
    - demo.azure.com
The example output shows the ingress resource is created.
```
$ kubectl apply -f hello-world-ingress.yaml
-ingress.extensions/hello-world-ingress created
+ingress.networking.k8s.io/hello-world-ingress created
```
## Test the ingress configuration
aks Ingress Static Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-static-ip.md
description: Learn how to install and configure an NGINX ingress controller with a static public IP address that uses Let's Encrypt for automatic TLS certificate generation in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/23/2021 Last updated : 03/07/2022 #Customer intent: As a cluster operator or developer, I want to use an ingress controller with a static IP address to handle the flow of incoming traffic and secure my apps using automatically generated TLS certificates.
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
- kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-staging
- nginx.ingress.kubernetes.io/rewrite-target: /$1
+ nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
+ ingressClassName: nginx
  tls:
  - hosts:
    - demo-aks-ingress.eastus.cloudapp.azure.com
kubectl apply -f hello-world-ingress.yaml --namespace ingress-basic
The output should be similar to this example:

```
-ingress.extensions/hello-world-ingress created
+ingress.networking.k8s.io/hello-world-ingress created
```
## Verify certificate object
aks Ingress Tls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ingress-tls.md
description: Learn how to install and configure an NGINX ingress controller that uses Let's Encrypt for automatic TLS certificate generation in an Azure Kubernetes Service (AKS) cluster. Previously updated : 04/23/2021 Last updated : 03/04/2022 #Customer intent: As a cluster operator or developer, I want to use an ingress controller to handle the flow of incoming traffic and secure my apps using automatically generated TLS certificates
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
- kubernetes.io/ingress.class: nginx
- nginx.ingress.kubernetes.io/rewrite-target: /$1
+ nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
+ ingressClassName: nginx
  tls:
  - hosts:
    - hello-world-ingress.MY_CUSTOM_DOMAIN
kind: Ingress
metadata:
  name: hello-world-ingress-static
  annotations:
- kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
    nginx.ingress.kubernetes.io/use-regex: "true"
    cert-manager.io/cluster-issuer: letsencrypt
spec:
+ ingressClassName: nginx
  tls:
  - hosts:
    - hello-world-ingress.MY_CUSTOM_DOMAIN
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
The following environment variables are related to the app environment in genera
| `WEBSITE_PRIVATE_EXTENSIONS` | Set to `0` to disable the use of private site extensions. ||
| `WEBSITE_TIME_ZONE` | By default, the time zone for the app is always UTC. You can change it to any of the valid values that are listed in [TimeZone](/previous-versions/windows/it-pro/windows-vista/cc749073(v=ws.10)). If the specified value isn't recognized, UTC is used. | `Atlantic Standard Time` |
| `WEBSITE_ADD_SITENAME_BINDINGS_IN_APPHOST_CONFIG` | In the case of a storage volume failover or reconfiguration, your app is switched over to a standby storage volume. The default setting of `1` prevents your worker process from recycling when the storage infrastructure changes. If you are running a Windows Communication Foundation (WCF) app, disable it by setting it to `0`. The setting is slot-specific, so you should set it in all slots. ||
-| `WEBSITE_PROACTIVE_AUTOHEAL_ENABLED` | By default, a VM instance is proactively "autohealed" when it's using more than 90% of allocated memory for more than 30 seconds, or when 80% of the total requests in the last two minutes take longer than 200 seconds. If a VN instance has triggered one of these rules, the recovery process is an overlapping restart of the instance. Set to `false` to disable this recovery behavior. The default is `true`. For more information, see [Proactive Auto Heal](https://azure.github.io/AppService/2017/08/17/Introducing-Proactive-Auto-Heal.html). ||
+| `WEBSITE_PROACTIVE_AUTOHEAL_ENABLED` | By default, a VM instance is proactively "autohealed" when it's using more than 90% of allocated memory for more than 30 seconds, or when 80% of the total requests in the last two minutes take longer than 200 seconds. If a VM instance has triggered one of these rules, the recovery process is an overlapping restart of the instance. Set to `false` to disable this recovery behavior. The default is `true`. For more information, see [Proactive Auto Heal](https://azure.github.io/AppService/2017/08/17/Introducing-Proactive-Auto-Heal.html). ||
| `WEBSITE_PROACTIVE_CRASHMONITORING_ENABLED` | Whenever the w3wp.exe process on a VM instance of your app crashes due to an unhandled exception for more than three times in 24 hours, a debugger process is attached to the main worker process on that instance, and collects a memory dump when the worker process crashes again. This memory dump is then analyzed and the call stack of the thread that caused the crash is logged in your App Service's logs. Set to `false` to disable this automatic monitoring behavior. The default is `true`. For more information, see [Proactive Crash Monitoring](https://azure.github.io/AppService/2021/03/01/Proactive-Crash-Monitoring-in-Azure-App-Service.html). ||
| `WEBSITE_DAAS_STORAGE_SASURI` | During crash monitoring (proactive or manual), the memory dumps are deleted by default. To save the memory dumps to a storage blob container, specify the SAS URI. ||
| `WEBSITE_CRASHMONITORING_ENABLED` | Set to `true` to enable [crash monitoring](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html) manually. You must also set `WEBSITE_DAAS_STORAGE_SASURI` and `WEBSITE_CRASHMONITORING_SETTINGS`. The default is `false`. This setting has no effect if remote debugging is enabled. Also, if this setting is set to `true`, [proactive crash monitoring](https://azure.github.io/AppService/2020/08/11/Crash-Monitoring-Feature-in-Azure-App-Service.html) is disabled. ||
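These app settings surface inside the app as environment variables. A minimal sketch of how an app might read them; the dict below stands in for `os.environ` so the example is self-contained, and the sample values are hypothetical.

```python
# Sketch: reading App Service app settings at runtime. `app_settings` stands
# in for os.environ; the values below are hypothetical.
app_settings = {
    "WEBSITE_TIME_ZONE": "Atlantic Standard Time",
    "WEBSITE_PROACTIVE_AUTOHEAL_ENABLED": "false",
}

# UTC is the documented default when WEBSITE_TIME_ZONE is unset or invalid.
time_zone = app_settings.get("WEBSITE_TIME_ZONE", "UTC")

# Settings arrive as strings; boolean-like settings need explicit parsing.
autoheal = app_settings.get("WEBSITE_PROACTIVE_AUTOHEAL_ENABLED", "true") == "true"
```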
applied-ai-services Concept Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom.md
Custom models can be one of two types, [**custom template**](concept-custom-temp
### Custom template model
- The custom template or custom form model relies on a consistent visual template to extract the labeled data. The accuracy of your model is affected by variances in the visual structure of your documents. Structured forms such as questionnaires or applications are examples of consistent visual templates. Your training set will consist of structured documents where the formatting and layout are static and constant from one document instance to the next. Custom template models support key-value pairs, selection marks, tables, signature fields and regions and can be trained on documents in any of the [supported languages](language-support.md). For more information, *see* [custom template models](concept-custom-template.md ).
+The custom template or custom form model relies on a consistent visual template to extract the labeled data. The accuracy of your model is affected by variances in the visual structure of your documents. Structured forms such as questionnaires or applications are examples of consistent visual templates.
+
+Your training set will consist of structured documents where the formatting and layout are static and constant from one document instance to the next. Custom template models support key-value pairs, selection marks, tables, signature fields, and regions. Template models can be trained on documents in any of the [supported languages](language-support.md). For more information, *see* [custom template models](concept-custom-template.md).
> [!TIP] >
Custom models can be one of two types, [**custom template**](concept-custom-temp
### Custom neural model
-The custom neural (custom document) model is a deep learning model type that relies on a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
+The custom neural (custom document) model uses deep learning and a base model trained on a large collection of documents. This model is then fine-tuned or adapted to your data when you train the model with a labeled dataset. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. Custom neural models currently support English-language documents. When you're choosing between the two model types, start with a neural model if it meets your functional needs. See [neural models](concept-custom-neural.md) to learn more about custom document models.
## Build mode
The build custom model operation has added support for the *template* and *neura
* Template models only accept documents that have the same basic page structure (a uniform visual appearance) or the same relative positioning of elements within the document.
-* Neural models support documents that have the same information, but different page structures. Examples of these documents include United States W2 forms, which share the same information, but may vary in appearance by the company that created the document. Neural models currently only support English text.
+* Neural models support documents that have the same information, but different page structures. Examples of these documents include United States W2 forms, which share the same information, but may vary in appearance across companies. Neural models currently only support English text.
This table provides links to the build mode programming language SDK references and code samples on GitHub:
The table below compares custom template and custom neural features:
The following tools are supported by Form Recognizer v2.1:
-| Feature | Resources |
-|-|-|
-|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](quickstarts/try-sdk-rest-api.md)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|
+| Feature | Resources | Model ID|
+|---|---|:---|
+|Custom model| <ul><li>[Form Recognizer labeling tool](https://fott-2-1.azurewebsites.net)</li><li>[REST API](quickstarts/try-sdk-rest-api.md?pivots=programming-language-rest-api#analyze-forms-with-a-custom-model)</li><li>[Client library SDK](quickstarts/try-sdk-rest-api.md)</li><li>[Form Recognizer Docker container](containers/form-recognizer-container-install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)</li></ul>|***custom-model-id***|
The following tools are supported by Form Recognizer v3.0:
-| Feature | Resources |
-|-|-|
-|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/try-v3-csharp-sdk.md)</li><li>[Python SDK](quickstarts/try-v3-python-sdk.md)</li></ul>|
+| Feature | Resources | Model ID|
+|---|---|:---|
+|Custom model| <ul><li>[Form Recognizer Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</li><li>[REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)</li><li>[C# SDK](quickstarts/try-v3-csharp-sdk.md)</li><li>[Python SDK](quickstarts/try-v3-python-sdk.md)</li></ul>|***custom-model-id***|
### Try Form Recognizer
applied-ai-services Try V3 Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-csharp-sdk.md
Previously updated : 03/14/2022 Last updated : 03/16/2022 recommendations: false + <!-- markdownlint-disable MD025 -->
+<!-- markdownlint-disable MD036 -->
+ # Get started: Form Recognizer C# SDK v3.0 | Preview

>[!NOTE]
Analyze and extract text, tables, structure, key-value pairs, and named entities
> * We've added the file URI value to the `Uri fileUri` variable at the top of the script.
> * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see the [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
-### Add the following code to the Program.cs file:
+**Add the following code sample to the Program.cs file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```csharp
using Azure;
for (int i = 0; i < result.Tables.Count; i++)
### General document model output

Visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-general-document-output.md).
+___
## Layout model
Extract text, selection marks, text styles, table structures, and bounding regio
> * We've added the file URI value to the `Uri fileUri` variable at the top of the script.
> * To extract the layout from a given file at a URI, use the `StartAnalyzeDocumentFromUri` method and pass `prebuilt-layout` as the model ID. The returned value is an `AnalyzeResult` object containing data from the submitted document.
-#### Add the following code to the Program.cs file:
+**Add the following code sample to the Program.cs file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```csharp
using Azure;
for (int i = 0; i < result.Tables.Count; i++)
Visit the Azure samples repository on GitHub to view the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/dotnet/FormRecognizer/v3-csharp-sdk-layout-output.md).

## Prebuilt model

Analyze and extract common fields from specific document types using a prebuilt model. In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
Analyze and extract common fields from specific document types using a prebuilt
> * To analyze a given file at a URI, use the `StartAnalyzeDocumentFromUri` method and pass `prebuilt-invoice` as the model ID. The returned value is an `AnalyzeResult` object containing data from the submitted document.
> * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
-#### Add the following code to your Program.cs file:
+**Add the following code sample to your Program.cs file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```csharp
That's it, congratulations!
In this quickstart, you used the Form Recognizer C# SDK to analyze various forms and documents in different ways. Next, explore the reference documentation to learn about Form Recognizer API in more depth.
-## Next steps
-
-> [!div class="nextstepaction"]
-> [REST API v3.0 reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)
+## Next step
> [!div class="nextstepaction"]
-> [Form Recognizer C#/.NET reference library](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet&preserve-view=true)
+> [Learn more about Form Recognizer REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
applied-ai-services Try V3 Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-java-sdk.md
Previously updated : 03/08/2022 Last updated : 03/16/2022 recommendations: false- <!-- markdownlint-disable MD025 -->
>[!NOTE] > Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-[Reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/jav)
+[Reference documentation](/jav)
Get started with Azure Form Recognizer using the Java programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
In this quickstart, you'll use the following features to analyze and extract data and
## Prerequisites

* Azure subscription - [Create one for free](https://azure.microsoft.com/free/cognitive-services/).
* The latest version of [Visual Studio Code](https://code.visualstudio.com/) or your preferred IDE. *See* [Java in Visual Studio Code](https://code.visualstudio.com/docs/languages/java).

>[!TIP]
In this quickstart, you'll use the following features to analyze and extract data and
## Set up
-#### Create a new Gradle project
+### Create a new Gradle project
1. In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app called **form-recognizer-app**, and navigate to it.
In this quickstart, you'll use the following features to analyze and extract data and
1. Accept the default project name (form-recognizer-app)
-#### Install the client library
+### Install the client library
This quickstart uses the Gradle dependency manager. You can find the client library and information for other dependency managers on the [Maven Central Repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer).
This quickstart uses the Gradle dependency manager. You can find the client libr
mavenCentral() } dependencies {
- implementation(group = "com.azure", name = "azure-ai-formrecognizer", version = "4.0.0-beta.3")
+ implementation(group = "com.azure", name = "azure-ai-formrecognizer", version = "4.0.0-beta.4")
} ```
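If you prefer to see that dependency in context, a minimal `build.gradle.kts` for this quickstart might look like the following sketch. The `application` plugin and the `FormRecognizer` main-class name are assumptions based on the project layout this quickstart describes:

```kotlin
plugins {
    java
    application
}

repositories {
    mavenCentral()
}

dependencies {
    // Form Recognizer client library from Maven Central
    implementation(group = "com.azure", name = "azure-ai-formrecognizer", version = "4.0.0-beta.4")
}

application {
    // Assumes the FormRecognizer class you create later in this quickstart.
    mainClass.set("FormRecognizer")
}
```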
-#### Create a Java file
+### Create a Java application
+
+To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your key from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
1. From the form-recognizer-app directory, run the following command:
This quickstart uses the Gradle dependency manager. You can find the client libr
:::image type="content" source="../media/quickstarts/java-directories-2.png" alt-text="Screenshot: Java directory structure":::
-1. Navigate to the `java` directory and create a file called *FormRecognizer.java*.
+1. Navigate to the `java` directory and create a file named **`FormRecognizer.java`**.
> [!TIP] >
This quickstart uses the Gradle dependency manager. You can find the client libr
> * Open a PowerShell window in your project directory by holding down the Shift key and right-clicking the folder.
> * Type the following command **New-Item FormRecognizer.java**.
-1. Open the `FormRecognizer.java` file in your preferred editor or IDE and add the following `import` statements:
-
- ```java
- import com.azure.ai.formrecognizer.*;
- import com.azure.ai.formrecognizer.models.AnalyzeResult;
- import com.azure.ai.formrecognizer.models.DocumentLine;
- import com.azure.ai.formrecognizer.models.AnalyzedDocument;
- import com.azure.ai.formrecognizer.models.DocumentOperationResult;
- import com.azure.ai.formrecognizer.models.DocumentWord;
- import com.azure.ai.formrecognizer.models.DocumentTable;
- import com.azure.core.credential.AzureKeyCredential;
- import com.azure.core.util.polling.SyncPoller;
-
- import java.util.List;
- import java.util.Arrays;
- ```
-
-#### Create the **FormRecognizer** class:
-
-Next, you'll need to create a public class for your project:
-
-```java
-public class FormRecognizer {
- // All project code goes here...
-}
-```
-
-> [!TIP]
-> If you would like to try more than one code sample:
->
-> * Select one of the sample code blocks below to copy and paste into your application.
-> * [**Build and run your application**](#build-and-run-your-application).
-> * Comment out that sample code block but keep the set-up code and library directives.
-> * Select another sample code block to copy and paste into your application.
-> * [**Build and run your application**](#build-and-run-your-application).
-> * You can continue to comment out, copy/paste, build, and run the sample blocks of code.
-
-#### Select a code sample to copy and paste into your application's main method:
+1. Open the `FormRecognizer.java` file and select one of the following code samples to copy and paste into your application:
-* [**General document**](#general-document-model)
+ * [**General document**](#general-document-model)
-* [**Layout**](#layout-model)
+ * [**Layout**](#layout-model)
-* [**Prebuilt Invoice**](#prebuilt-model)
+ * [**Prebuilt Invoice**](#prebuilt-model)
> [!IMPORTANT] >
Extract text, tables, structure, key-value pairs, and named entities from docume
> * We've added the file URI value to the `documentUrl` variable in the main method. > * For simplicity, all the entity fields that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
-Add the following code to the `FormRecognizer` class. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:
+**Add the following code sample to the `FormRecognizer.java` file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```java
- private static final String key = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
- private static final String endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";
+ import com.azure.ai.formrecognizer.*;
+ import com.azure.ai.formrecognizer.models.AnalyzeResult;
+ import com.azure.ai.formrecognizer.models.DocumentLine;
+ import com.azure.ai.formrecognizer.models.AnalyzedDocument;
+ import com.azure.ai.formrecognizer.models.DocumentOperationResult;
+ import com.azure.ai.formrecognizer.models.DocumentWord;
+ import com.azure.ai.formrecognizer.models.DocumentTable;
+ import com.azure.core.credential.AzureKeyCredential;
+ import com.azure.core.util.polling.SyncPoller;
- public static void main(String[] args) {
+ import java.util.List;
+ import java.util.Arrays;
- DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential(key))
- .endpoint(endpoint)
- .buildClient();
-
- String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
- String modelId = "prebuilt-document";
- SyncPoller < DocumentOperationResult, AnalyzeResult> analyzeDocumentPoller =
- client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
-
- AnalyzeResult analyzeResult = analyzeDocumentPoller.getFinalResult();
-
- // pages
- analyzeResult.getPages().forEach(documentPage -> {
- System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
- documentPage.getWidth(),
- documentPage.getHeight(),
- documentPage.getUnit());
-
- // lines
- documentPage.getLines().forEach(documentLine ->
- System.out.printf("Line %s is within a bounding box %s.%n",
- documentLine.getContent(),
- documentLine.getBoundingBox().toString()));
-
- // words
- documentPage.getWords().forEach(documentWord ->
- System.out.printf("Word %s has a confidence score of %.2f%n.",
- documentWord.getContent(),
- documentWord.getConfidence()));
- });
-
- // tables
- List <DocumentTable> tables = analyzeResult.getTables();
- for (int i = 0; i < tables.size(); i++) {
- DocumentTable documentTable = tables.get(i);
- System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
- documentTable.getColumnCount());
- documentTable.getCells().forEach(documentTableCell -> {
- System.out.printf("Cell '%s', has row index %d and column index %d.%n",
- documentTableCell.getContent(),
- documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
+ public class FormRecognizer {
+
+ // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
+ private static final String endpoint = "<your-endpoint>";
+ private static final String key = "<your-key>";
+
+ public static void main(String[] args) {
+
+ // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+ DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
+ .buildClient();
+
+ // sample document
+ String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
+ String modelId = "prebuilt-document";
+ SyncPoller < DocumentOperationResult, AnalyzeResult> analyzeDocumentPoller =
+ client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
+
+ AnalyzeResult analyzeResult = analyzeDocumentPoller.getFinalResult();
+
+ // pages
+ analyzeResult.getPages().forEach(documentPage -> {
+ System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
+ documentPage.getWidth(),
+ documentPage.getHeight(),
+ documentPage.getUnit());
+
+ // lines
+ documentPage.getLines().forEach(documentLine ->
+ System.out.printf("Line %s is within a bounding box %s.%n",
+ documentLine.getContent(),
+ documentLine.getBoundingBox().toString()));
+
+ // words
+ documentPage.getWords().forEach(documentWord ->
+                System.out.printf("Word %s has a confidence score of %.2f.%n",
+ documentWord.getContent(),
+ documentWord.getConfidence()));
+ });
+
+ // tables
+ List <DocumentTable> tables = analyzeResult.getTables();
+ for (int i = 0; i < tables.size(); i++) {
+ DocumentTable documentTable = tables.get(i);
+ System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
+ documentTable.getColumnCount());
+ documentTable.getCells().forEach(documentTableCell -> {
+ System.out.printf("Cell '%s', has row index %d and column index %d.%n",
+ documentTableCell.getContent(),
+ documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
+ });
+ System.out.println();
+ }
+
+ // Entities
+ analyzeResult.getEntities().forEach(documentEntity -> {
+ System.out.printf("Entity category : %s, sub-category %s%n: ",
+ documentEntity.getCategory(), documentEntity.getSubCategory());
+ System.out.printf("Entity content: %s%n: ", documentEntity.getContent());
+ System.out.printf("Entity confidence: %.2f%n", documentEntity.getConfidence());
+ });
+
+ // Key-value pairs
+ analyzeResult.getKeyValuePairs().forEach(documentKeyValuePair -> {
+ System.out.printf("Key content: %s%n", documentKeyValuePair.getKey().getContent());
+ System.out.printf("Key content bounding region: %s%n",
+ documentKeyValuePair.getKey().getBoundingRegions().toString());
+
+ if (documentKeyValuePair.getValue() != null) {
+ System.out.printf("Value content: %s%n", documentKeyValuePair.getValue().getContent());
+ System.out.printf("Value content bounding region: %s%n", documentKeyValuePair.getValue().getBoundingRegions().toString());
+ }
});
- System.out.println();
    }
- // Entities
- analyzeResult.getEntities().forEach(documentEntity -> {
- System.out.printf("Entity category : %s, sub-category %s%n: ",
- documentEntity.getCategory(), documentEntity.getSubCategory());
- System.out.printf("Entity content: %s%n: ", documentEntity.getContent());
- System.out.printf("Entity confidence: %.2f%n", documentEntity.getConfidence());
- });
-
- // Key-value pairs
- analyzeResult.getKeyValuePairs().forEach(documentKeyValuePair -> {
- System.out.printf("Key content: %s%n", documentKeyValuePair.getKey().getContent());
- System.out.printf("Key content bounding region: %s%n",
- documentKeyValuePair.getKey().getBoundingRegions().toString());
-
- if (documentKeyValuePair.getValue() != null) {
- System.out.printf("Value content: %s%n", documentKeyValuePair.getValue().getContent());
- System.out.printf("Value content bounding region: %s%n", documentKeyValuePair.getValue().getBoundingRegions().toString());
- }
- });
}
```
+### General document model output
+
+Visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/jav).
+ ## Layout model

Extract text, selection marks, text styles, table structures, and bounding region coordinates from documents.
Extract text, selection marks, text styles, table structures, and bounding regio
> * To analyze a given file at a URI, you'll use the `beginAnalyzeDocumentFromUrl` method and pass `prebuilt-layout` as the model Id. The returned value is an `AnalyzeResult` object containing data about the submitted document.
> * We've added the file URI value to the `documentUrl` variable in the main method.
-#### Update the **FormRecognizer** class:
-
-Add the following code to the `FormRecognizer` class. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:
+**Add the following code sample to the `FormRecognizer.java` file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```java
-public static void main(String[] args) {
- DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential(key))
- .endpoint(endpoint)
- .buildClient();
-
- String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
- String modelId = "prebuilt-layout";
-
- SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeLayoutResultPoller =
- client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
-
- AnalyzeResult analyzeLayoutResult = analyzeLayoutResultPoller.getFinalResult();
-
- // pages
- analyzeLayoutResult.getPages().forEach(documentPage -> {
- System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
- documentPage.getWidth(),
- documentPage.getHeight(),
- documentPage.getUnit());
-
- // lines
- documentPage.getLines().forEach(documentLine ->
- System.out.printf("Line %s is within a bounding box %s.%n",
- documentLine.getContent(),
- documentLine.getBoundingBox().toString()));
-
- // words
- documentPage.getWords().forEach(documentWord ->
- System.out.printf("Word '%s' has a confidence score of %.2f.%n",
- documentWord.getContent(),
- documentWord.getConfidence()));
-
- // selection marks
- documentPage.getSelectionMarks().forEach(documentSelectionMark ->
- System.out.printf("Selection mark is %s and is within a bounding box %s with confidence %.2f.%n",
- documentSelectionMark.getState().toString(),
- documentSelectionMark.getBoundingBox().toString(),
- documentSelectionMark.getConfidence()));
- });
-
- // tables
- List < DocumentTable > tables = analyzeLayoutResult.getTables();
- for (int i = 0; i < tables.size(); i++) {
- DocumentTable documentTable = tables.get(i);
- System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
- documentTable.getColumnCount());
- documentTable.getCells().forEach(documentTableCell -> {
- System.out.printf("Cell '%s', has row index %d and column index %d.%n", documentTableCell.getContent(),
- documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
+ import com.azure.ai.formrecognizer.*;
+ import com.azure.ai.formrecognizer.models.AnalyzeResult;
+ import com.azure.ai.formrecognizer.models.DocumentLine;
+ import com.azure.ai.formrecognizer.models.AnalyzedDocument;
+ import com.azure.ai.formrecognizer.models.DocumentOperationResult;
+ import com.azure.ai.formrecognizer.models.DocumentWord;
+ import com.azure.ai.formrecognizer.models.DocumentTable;
+ import com.azure.core.credential.AzureKeyCredential;
+ import com.azure.core.util.polling.SyncPoller;
+
+ import java.util.List;
+ import java.util.Arrays;
+
+ public class FormRecognizer {
+
+ // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
+ private static final String endpoint = "<your-endpoint>";
+ private static final String key = "<your-key>";
+
+ public static void main(String[] args) {
+
+ // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+ DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
+ .buildClient();
+
+ // sample document
+ String documentUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf";
+ String modelId = "prebuilt-layout";
+
+ SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeLayoutResultPoller =
+ client.beginAnalyzeDocumentFromUrl(modelId, documentUrl);
+
+ AnalyzeResult analyzeLayoutResult = analyzeLayoutResultPoller.getFinalResult();
+
+ // pages
+ analyzeLayoutResult.getPages().forEach(documentPage -> {
+ System.out.printf("Page has width: %.2f and height: %.2f, measured with unit: %s%n",
+ documentPage.getWidth(),
+ documentPage.getHeight(),
+ documentPage.getUnit());
+
+ // lines
+ documentPage.getLines().forEach(documentLine ->
+ System.out.printf("Line %s is within a bounding box %s.%n",
+ documentLine.getContent(),
+ documentLine.getBoundingBox().toString()));
+
+ // words
+ documentPage.getWords().forEach(documentWord ->
+ System.out.printf("Word '%s' has a confidence score of %.2f.%n",
+ documentWord.getContent(),
+ documentWord.getConfidence()));
+
+ // selection marks
+ documentPage.getSelectionMarks().forEach(documentSelectionMark ->
+ System.out.printf("Selection mark is %s and is within a bounding box %s with confidence %.2f.%n",
+ documentSelectionMark.getState().toString(),
+ documentSelectionMark.getBoundingBox().toString(),
+ documentSelectionMark.getConfidence()));
});
- System.out.println();
+
+ // tables
+ List < DocumentTable > tables = analyzeLayoutResult.getTables();
+ for (int i = 0; i < tables.size(); i++) {
+ DocumentTable documentTable = tables.get(i);
+ System.out.printf("Table %d has %d rows and %d columns.%n", i, documentTable.getRowCount(),
+ documentTable.getColumnCount());
+ documentTable.getCells().forEach(documentTableCell -> {
+ System.out.printf("Cell '%s', has row index %d and column index %d.%n", documentTableCell.getContent(),
+ documentTableCell.getRowIndex(), documentTableCell.getColumnIndex());
+ });
+ System.out.println();
+ }
    }
}
```
+### Layout model output
+
+Visit the Azure samples repository on GitHub to view the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/jav).
+ ## Prebuilt model
-In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
+Analyze and extract common fields from specific document types using a prebuilt model. In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
> [!TIP]
> You aren't limited to invoices. There are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
> * To analyze a given file at a URI, you'll use the `beginAnalyzeDocumentFromUrl` method and pass `prebuilt-invoice` as the model Id. The returned value is an `AnalyzeResult` object containing data about the submitted document.
> * For simplicity, all the key-value pairs that the service returns are not shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
-#### Update the **FormRecognizer** class:
-
-Replace the existing FormRecognizer class with the following code (be certain to update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal):
+**Add the following code sample to the `FormRecognizer.java` file. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```java
-public class FormRecognizer {
+ import com.azure.ai.formrecognizer.*;
+ import com.azure.ai.formrecognizer.models.AnalyzeResult;
+ import com.azure.ai.formrecognizer.models.DocumentLine;
+ import com.azure.ai.formrecognizer.models.AnalyzedDocument;
+ import com.azure.ai.formrecognizer.models.DocumentOperationResult;
+ import com.azure.ai.formrecognizer.models.DocumentWord;
+ import com.azure.ai.formrecognizer.models.DocumentTable;
+ import com.azure.core.credential.AzureKeyCredential;
+ import com.azure.core.util.polling.SyncPoller;
- static final String key = "PASTE_YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY_HERE";
- static final String endpoint = "PASTE_YOUR_FORM_RECOGNIZER_ENDPOINT_HERE";
+ import java.util.List;
+ import java.util.Arrays;
- public static void main(String[] args) {
+ public class FormRecognizer {
- DocumentAnalysisClient documentAnalysisClient = new DocumentAnalysisClientBuilder()
- .credential(new AzureKeyCredential("{key}"))
- .endpoint("{endpoint}")
- .buildClient();
+ // set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
+ private static final String endpoint = "<your-endpoint>";
+ private static final String key = "<your-key>";
- String invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf"
+ public static void main(final String[] args) throws IOException {
+
+ // create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+ DocumentAnalysisClient client = new DocumentAnalysisClientBuilder()
+ .credential(new AzureKeyCredential(key))
+ .endpoint(endpoint)
+ .buildClient();
+
+ // sample document
+ String invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf";
String modelId = "prebuilt-invoice";
- PollerFlux < DocumentOperationResult, AnalyzeResult > analyzeInvoicePoller = client.beginAnalyzeDocumentFromUrl("prebuilt-invoice", invoiceUrl);
-
- Mono < AnalyzeResult > analyzeInvoiceResultMono = analyzeInvoicePoller
- .last()
- .flatMap(pollResponse - > {
- if (pollResponse.getStatus().isComplete()) {
- System.out.println("Polling completed successfully");
- return pollResponse.getFinalResult();
- } else {
- return Mono.error(new RuntimeException("Polling completed unsuccessfully with status:" +
- pollResponse.getStatus()));
- }
- });
+ SyncPoller < DocumentOperationResult, AnalyzeResult > analyzeInvoicePoller = client.beginAnalyzeDocumentFromUrl(modelId, invoiceUrl);
- analyzeInvoiceResultMono.subscribe(analyzeInvoiceResult - > {
- for (int i = 0; i < analyzeInvoiceResult.getDocuments().size(); i++) {
- AnalyzedDocument analyzedInvoice = analyzeInvoiceResult.getDocuments().get(i);
- Map < String, DocumentField > invoiceFields = analyzedInvoice.getFields();
- System.out.printf("-- Analyzing invoice %d --%n", i);
- DocumentField vendorNameField = invoiceFields.get("VendorName");
- if (vendorNameField != null) {
- if (DocumentFieldType.STRING == vendorNameField.getType()) {
- String merchantName = vendorNameField.getValueString();
- System.out.printf("Vendor Name: %s, confidence: %.2f%n",
- merchantName, vendorNameField.getConfidence());
- }
- }
-
- DocumentField vendorAddressField = invoiceFields.get("VendorAddress");
- if (vendorAddressField != null) {
- if (DocumentFieldType.STRING == vendorAddressField.getType()) {
- String merchantAddress = vendorAddressField.getValueString();
- System.out.printf("Vendor address: %s, confidence: %.2f%n",
- merchantAddress, vendorAddressField.getConfidence());
- }
- }
-
- DocumentField customerNameField = invoiceFields.get("CustomerName");
- if (customerNameField != null) {
- if (DocumentFieldType.STRING == customerNameField.getType()) {
- String merchantAddress = customerNameField.getValueString();
- System.out.printf("Customer Name: %s, confidence: %.2f%n",
- merchantAddress, customerNameField.getConfidence());
- }
- }
+ AnalyzeResult analyzeInvoiceResult = analyzeInvoicePoller.getFinalResult();
- DocumentField customerAddressRecipientField = invoiceFields.get("CustomerAddressRecipient");
- if (customerAddressRecipientField != null) {
- if (DocumentFieldType.STRING == customerAddressRecipientField.getType()) {
- String customerAddr = customerAddressRecipientField.getValueString();
- System.out.printf("Customer Address Recipient: %s, confidence: %.2f%n",
- customerAddr, customerAddressRecipientField.getConfidence());
- }
- }
+ for (int i = 0; i < analyzeInvoiceResult.getDocuments().size(); i++) {
+ AnalyzedDocument analyzedInvoice = analyzeInvoiceResult.getDocuments().get(i);
+ Map < String, DocumentField > invoiceFields = analyzedInvoice.getFields();
+ System.out.printf("-- Analyzing invoice %d --%n", i);
+ System.out.printf("Analyzed document has doc type %s with confidence: %.2f%n",
analyzedInvoice.getDocType(), analyzedInvoice.getConfidence());
- DocumentField invoiceIdField = invoiceFields.get("InvoiceId");
- if (invoiceIdField != null) {
- if (DocumentFieldType.STRING == invoiceIdField.getType()) {
- String invoiceId = invoiceIdField.getValueString();
- System.out.printf("Invoice ID: %s, confidence: %.2f%n",
- invoiceId, invoiceIdField.getConfidence());
+ DocumentField vendorNameField = invoiceFields.get("VendorName");
+ if (vendorNameField != null) {
+ if (DocumentFieldType.STRING == vendorNameField.getType()) {
+ String merchantName = vendorNameField.getValueString();
+ Float confidence = vendorNameField.getConfidence();
+ System.out.printf("Vendor Name: %s, confidence: %.2f%n",
+ merchantName, confidence);
+ }
+ }
+
+ DocumentField vendorAddressField = invoiceFields.get("VendorAddress");
+ if (vendorAddressField != null) {
+ if (DocumentFieldType.STRING == vendorAddressField.getType()) {
+ String merchantAddress = vendorAddressField.getValueString();
+ System.out.printf("Vendor address: %s, confidence: %.2f%n",
+ merchantAddress, vendorAddressField.getConfidence());
+ }
+ }
+
+ DocumentField customerNameField = invoiceFields.get("CustomerName");
+ if (customerNameField != null) {
+ if (DocumentFieldType.STRING == customerNameField.getType()) {
+ String customerName = customerNameField.getValueString();
+ System.out.printf("Customer Name: %s, confidence: %.2f%n",
+ customerName, customerNameField.getConfidence());
+ }
+ }
+
+ DocumentField customerAddressRecipientField = invoiceFields.get("CustomerAddressRecipient");
+ if (customerAddressRecipientField != null) {
+ if (DocumentFieldType.STRING == customerAddressRecipientField.getType()) {
+ String customerAddr = customerAddressRecipientField.getValueString();
+ System.out.printf("Customer Address Recipient: %s, confidence: %.2f%n",
+ customerAddr, customerAddressRecipientField.getConfidence());
+ }
+ }
+
+ DocumentField invoiceIdField = invoiceFields.get("InvoiceId");
+ if (invoiceIdField != null) {
+ if (DocumentFieldType.STRING == invoiceIdField.getType()) {
+ String invoiceId = invoiceIdField.getValueString();
+ System.out.printf("Invoice ID: %s, confidence: %.2f%n",
+ invoiceId, invoiceIdField.getConfidence());
+ }
+ }
+
+ DocumentField invoiceDateField = invoiceFields.get("InvoiceDate");
+ if (invoiceDateField != null) {
+ if (DocumentFieldType.DATE == invoiceDateField.getType()) {
+ LocalDate invoiceDate = invoiceDateField.getValueDate();
+ System.out.printf("Invoice Date: %s, confidence: %.2f%n",
+ invoiceDate, invoiceDateField.getConfidence());
+ }
+ }
+
+ DocumentField invoiceTotalField = invoiceFields.get("InvoiceTotal");
+ if (invoiceTotalField != null) {
+ if (DocumentFieldType.FLOAT == invoiceTotalField.getType()) {
+ Float invoiceTotal = invoiceTotalField.getValueFloat();
+ System.out.printf("Invoice Total: %.2f, confidence: %.2f%n",
+ invoiceTotal, invoiceTotalField.getConfidence());
+ }
+ }
+
+ DocumentField invoiceItemsField = invoiceFields.get("Items");
+ if (invoiceItemsField != null) {
+ System.out.printf("Invoice Items: %n");
+ if (DocumentFieldType.LIST == invoiceItemsField.getType()) {
+ List < DocumentField > invoiceItems = invoiceItemsField.getValueList();
+ invoiceItems.stream()
+ .filter(invoiceItem -> DocumentFieldType.MAP == invoiceItem.getType())
+ .map(formField -> formField.getValueMap())
+ .forEach(formFieldMap -> formFieldMap.forEach((key, formField) -> {
+ // See a full list of fields found on an invoice here:
+ // https://aka.ms/formrecognizer/invoicefields
+ if ("Description".equals(key)) {
+ if (DocumentFieldType.STRING == formField.getType()) {
+ String name = formField.getValueString();
+ System.out.printf("Description: %s, confidence: %.2f%n",
+ name, formField.getConfidence());
}
- }
-
- DocumentField invoiceDateField = invoiceFields.get("InvoiceDate");
- if (customerNameField != null) {
- if (DocumentFieldType.DATE == invoiceDateField.getType()) {
- LocalDate invoiceDate = invoiceDateField.getValueDate();
- System.out.printf("Invoice Date: %s, confidence: %.2f%n",
- invoiceDate, invoiceDateField.getConfidence());
+ }
+ if ("Quantity".equals(key)) {
+ if (DocumentFieldType.FLOAT == formField.getType()) {
+ Float quantity = formField.getValueFloat();
+ System.out.printf("Quantity: %f, confidence: %.2f%n",
+ quantity, formField.getConfidence());
}
- }
-
- DocumentField invoiceTotalField = invoiceFields.get("InvoiceTotal");
- if (customerAddressRecipientField != null) {
- if (DocumentFieldType.FLOAT == invoiceTotalField.getType()) {
- Float invoiceTotal = invoiceTotalField.getValueFloat();
- System.out.printf("Invoice Total: %.2f, confidence: %.2f%n",
- invoiceTotal, invoiceTotalField.getConfidence());
+ }
+ if ("UnitPrice".equals(key)) {
+ if (DocumentFieldType.FLOAT == formField.getType()) {
+ Float unitPrice = formField.getValueFloat();
+ System.out.printf("Unit Price: %f, confidence: %.2f%n",
+ unitPrice, formField.getConfidence());
}
- }
-
- DocumentField invoiceItemsField = invoiceFields.get("Items");
- if (invoiceItemsField != null) {
- System.out.printf("Invoice Items: %n");
- if (DocumentFieldType.LIST == invoiceItemsField.getType()) {
- List < DocumentField > invoiceItems = invoiceItemsField.getValueList();
- invoiceItems.stream()
- .filter(invoiceItem - > DocumentFieldType.MAP == invoiceItem.getType())
- .map(formField - > formField.getValueMap())
- .forEach(formFieldMap - > formFieldMap.forEach((key, formField) - > {
- // See a full list of fields found on an invoice here:
- // https://aka.ms/formrecognizer/invoicefields
- if ("Description".equals(key)) {
- if (DocumentFieldType.STRING == formField.getType()) {
- String name = formField.getValueString();
- System.out.printf("Description: %s, confidence: %.2fs%n",
- name, formField.getConfidence());
- }
- }
- if ("Quantity".equals(key)) {
- if (DocumentFieldType.FLOAT == formField.getType()) {
- Float quantity = formField.getValueFloat();
- System.out.printf("Quantity: %f, confidence: %.2f%n",
- quantity, formField.getConfidence());
- }
- }
- if ("UnitPrice".equals(key)) {
- if (DocumentFieldType.FLOAT == formField.getType()) {
- Float unitPrice = formField.getValueFloat();
- System.out.printf("Unit Price: %f, confidence: %.2f%n",
- unitPrice, formField.getConfidence());
- }
- }
- if ("ProductCode".equals(key)) {
- if (DocumentFieldType.FLOAT == formField.getType()) {
- Float productCode = formField.getValueFloat();
- System.out.printf("Product Code: %f, confidence: %.2f%n",
- productCode, formField.getConfidence());
- }
- }
- }));
+ }
+ if ("ProductCode".equals(key)) {
+ if (DocumentFieldType.FLOAT == formField.getType()) {
+ Float productCode = formField.getValueFloat();
+ System.out.printf("Product Code: %f, confidence: %.2f%n",
+ productCode, formField.getConfidence());
}
- }
- }
- });
+ }
+ }));
+ }
+ }
+ }
+ }
}
-}
```
+### Prebuilt model output
+
+Visit the Azure samples repository on GitHub to view the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/jav)
+ ## Build and run your application Navigate back to your main project directory, **form-recognizer-app**.
That's it, congratulations!
In this quickstart, you used the Form Recognizer Java SDK to analyze various forms and documents in different ways. Next, explore the reference documentation to learn about the Form Recognizer API in more depth.
-## Next steps
-
-> [!div class="nextstepaction"]
-> [REST API v3.0 reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)
+## Next step
> [!div class="nextstepaction"]
-> [Form Recognizer Java library reference](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0-beta.1/https://docsupdatetracker.net/index.html)
+> [Learn more about Form Recognizer REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
applied-ai-services Try V3 Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-python-sdk.md
Previously updated : 03/08/2022 Last updated : 03/15/2022 recommendations: false- <!-- markdownlint-disable MD025 --> # Get started: Form Recognizer Python SDK v3.0 | Preview >[!NOTE]
-> Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
+> Form Recognizer v3.0 is currently in public preview. Some features may not be supported or have limited capabilities.
-[Reference documentation](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0b3/https://docsupdatetracker.net/index.html) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/) | [Package (PyPi)](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b3/) | [Samples](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/README.md)
+[Reference documentation](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer?view=azure-python-preview&preserve-view=true) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/) | [Package (PyPi)](https://pypi.org/project/azure-ai-formrecognizer/3.2.0b3/) | [Samples](https://github.com/Azure/azure-sdk-for-python/blob/azure-ai-formrecognizer_3.2.0b3/sdk/formrecognizer/azure-ai-formrecognizer/samples/README.md)
Get started with Azure Form Recognizer using the Python programming language. Azure Form Recognizer is a cloud-based Azure Applied AI Service that uses machine learning to extract key-value pairs, text, and tables from your documents. You can easily call Form Recognizer models by integrating our client library SDKs into your workflows and applications. We recommend that you use the free service when you're learning the technology. Remember that the number of free pages is limited to 500 per month.
In this quickstart, you'll use the following features to analyze and extract data and
Open a terminal window in your local environment and install the Azure Form Recognizer client library for Python with pip: ```console
-pip install azure-ai-formrecognizer==3.2.0b2
+pip install azure-ai-formrecognizer==3.2.0b3
``` ### Create a new Python application
-Create a new Python file called **form_recognizer_quickstart.py** in your preferred editor or IDE. Then import the following libraries:
+To interact with the Form Recognizer service, you'll need to create an instance of the `DocumentAnalysisClient` class. To do so, you'll create an `AzureKeyCredential` with your key from the Azure portal and a `DocumentAnalysisClient` instance with the `AzureKeyCredential` and your Form Recognizer `endpoint`.
-```python
-import os
-from azure.core.exceptions import ResourceNotFoundError
-from azure.ai.formrecognizer import DocumentAnalysisClient
-from azure.core.credentials import AzureKeyCredential
-```
-
-### Create variables for your Azure resource API endpoint and key
-
-```python
-endpoint = "YOUR_FORM_RECOGNIZER_ENDPOINT"
-key = "YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY"
-```
-
-At this point, your Python application should contain the following lines of code:
-
-```python
-import os
-from azure.core.exceptions import ResourceNotFoundError
-from azure.ai.formrecognizer import DocumentAnalysisClient
-from azure.core.credentials import AzureKeyCredential
-
-endpoint = "YOUR_FORM_RECOGNIZER_ENDPOINT"
-key = "YOUR_FORM_RECOGNIZER_SUBSCRIPTION_KEY"
-
-```
+1. Create a new Python file called **form_recognizer_quickstart.py** in your preferred editor or IDE.
-> [!TIP]
-> If you would like to try more than one code sample:
->
-> * Select one of the sample code blocks below to copy and paste into your application.
-> * [**Run your application**](#run-your-application).
-> * Comment out that sample code block but keep the set-up code and library directives.
-> * Select another sample code block to copy and paste into your application.
-> * [**Run your application**](#run-your-application).
-> * You can continue to comment out, copy/paste, and run the sample blocks of code.
-
-### Select a code sample to copy and paste into your application:
+1. Open the **form_recognizer_quickstart.py** file and select one of the following code samples to copy and paste into your application:
* [**General document**](#general-document-model)
Extract text, tables, structure, key-value pairs, and named entities from docume
> * We've added the file URL value to the `docUrl` variable in the `analyze_general_documents` function. > * For simplicity, not all the entity fields that the service returns are shown here. To see the list of all supported fields and corresponding types, see our [General document](../concept-general-document.md#named-entity-recognition-ner-categories) concept page.
-###### Add the following code to your general document application on a line below the `key` variable
+<!-- markdownlint-disable MD036 -->
+**Add the following code sample to your form_recognizer_quickstart.py application. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```python
+# import libraries
+import os
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+# set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
+endpoint = "<your-endpoint>"
+key = "<your-key>"
+
def format_bounding_region(bounding_regions):
    if not bounding_regions:
        return "N/A"
def format_bounding_box(bounding_box):
def analyze_general_documents():
- # sample document
+ # sample document
docUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"
- document_analysis_client = DocumentAnalysisClient(
+ # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
+ document_analysis_client = DocumentAnalysisClient(
endpoint=endpoint, credential=AzureKeyCredential(key) )
if __name__ == "__main__":
analyze_general_documents() ```
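The diff hunks above elide the body of the formatting helpers. As a rough, self-contained sketch of what such a helper can look like (the exact output format in the official sample may differ, and the `Point` type here is a stand-in for the SDK's point objects, which expose `x` and `y` attributes):

```python
from collections import namedtuple

# Stand-in for the SDK's point objects, which expose `x` and `y` attributes.
Point = namedtuple("Point", ["x", "y"])

def format_bounding_box(bounding_box):
    """Render a polygon as a comma-separated list of [x, y] pairs."""
    if not bounding_box:
        return "N/A"
    return ", ".join(f"[{p.x}, {p.y}]" for p in bounding_box)

box = [Point(0.5, 1.0), Point(2.5, 1.0), Point(2.5, 3.0), Point(0.5, 3.0)]
print(format_bounding_box(box))   # [0.5, 1.0], [2.5, 1.0], [2.5, 3.0], [0.5, 3.0]
print(format_bounding_box(None))  # N/A
```

An empty or missing region renders as `N/A`, which keeps the printed analysis output readable when a page element has no coordinates.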
+### General document model output
+
+Visit the Azure samples repository on GitHub to view the [general document model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/v3-python-sdk-general-document-output.md)
+
+___
+ ## Layout model Extract text, selection marks, text styles, table structures, and bounding region coordinates from documents.
Extract text, selection marks, text styles, table structures, and bounding regio
> * We've added the file URL value to the `formUrl` variable in the `analyze_layout` function. > * To analyze a given file at a URL, you'll use the `begin_analyze_document_from_url` method and pass in `prebuilt-layout` as the model Id. The returned value is a `result` object containing data about the submitted document.
-#### Add the following code to your layout application on the line below the `key` variable
+**Add the following code sample to your form_recognizer_quickstart.py application. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```python
+# import libraries
+import os
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+# set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
+endpoint = "<your-endpoint>"
+key = "<your-key>"
+
def format_bounding_box(bounding_box):
    if not bounding_box:
        return "N/A"
def analyze_layout():
# sample form document formUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-layout.pdf"
+ # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
document_analysis_client = DocumentAnalysisClient( endpoint=endpoint, credential=AzureKeyCredential(key) )
if __name__ == "__main__":
```
+### Layout model output
+
+Visit the Azure samples repository on GitHub to view the [layout model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/v3-python-sdk-layout-output.md)
+
+___
+ ## Prebuilt model
-In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
+Analyze and extract common fields from specific document types using a prebuilt model. In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
> [!TIP] > You aren't limited to invoices; there are several prebuilt models to choose from, each of which has its own set of supported fields. The model to use for the analyze operation depends on the type of document to be analyzed. See [**model data extraction**](../concept-model-overview.md#model-data-extraction).
-#### Try the prebuilt invoice model
- > [!div class="checklist"] > > * Analyze an invoice using the prebuilt-invoice model. You can use our [sample invoice document](https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf) for this quickstart.
In this example, we'll analyze an invoice using the **prebuilt-invoice** model.
> * To analyze a given file at a URI, you'll use the `beginAnalyzeDocuments` method and pass `PrebuiltModels.Invoice` as the model Id. The returned value is a `result` object containing data about the submitted document. > * For simplicity, not all the key-value pairs that the service returns are shown here. To see the list of all supported fields and corresponding types, see our [Invoice](../concept-invoice.md#field-extraction) concept page.
-#### Add the following code to your prebuilt invoice application below the `key` variable
+**Add the following code sample to your form_recognizer_quickstart.py application. Make sure you update the key and endpoint variables with values from your Form Recognizer instance in the Azure portal:**
```python
+# import libraries
+import os
+from azure.ai.formrecognizer import DocumentAnalysisClient
+from azure.core.credentials import AzureKeyCredential
+
+# set `<your-endpoint>` and `<your-key>` variables with the values from the Azure portal
+endpoint = "<your-endpoint>"
+key = "<your-key>"
def format_bounding_region(bounding_regions): if not bounding_regions:
def analyze_invoice():
invoiceUrl = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-REST-api-samples/master/curl/form-recognizer/sample-invoice.pdf"
+ # create your `DocumentAnalysisClient` instance and `AzureKeyCredential` variable
document_analysis_client = DocumentAnalysisClient( endpoint=endpoint, credential=AzureKeyCredential(key) )
if __name__ == "__main__":
analyze_invoice() ```
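Each field the invoice sample reads requires the same presence check before its value and confidence are used. A small helper can centralize that pattern. This is only a sketch: the `.value` and `.confidence` attribute names assume the `DocumentField` shape in the 3.2.0b3 library, and a stand-in class is used so the snippet runs without the SDK installed:

```python
# Sketch of a presence-checking helper for analyzed document fields.
# The `.value` and `.confidence` attribute names are an assumption based on
# the DocumentField shape in azure-ai-formrecognizer 3.2.0b3; verify against
# the reference documentation.

class FakeField:
    """Stand-in for DocumentField so the sketch is runnable offline."""
    def __init__(self, value, confidence):
        self.value = value
        self.confidence = confidence

def describe_field(fields, name):
    """Format one field's value and confidence, or report that it is absent."""
    field = fields.get(name)
    if field is None:
        return f"{name}: not found"
    return f"{name}: {field.value}, confidence: {field.confidence:.2f}"

fields = {"VendorName": FakeField("Contoso", 0.98)}
print(describe_field(fields, "VendorName"))    # VendorName: Contoso, confidence: 0.98
print(describe_field(fields, "InvoiceTotal"))  # InvoiceTotal: not found
```

With the real SDK, `fields` would be the `result.documents[i].fields` mapping returned by `begin_analyze_document_from_url(...).result()`.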
+### Prebuilt model output
+
+Visit the Azure samples repository on GitHub to view the [prebuilt invoice model output](https://github.com/Azure-Samples/cognitive-services-quickstart-code/blob/master/python/FormRecognizer/v3-python-sdk-prebuilt-invoice-output.md)
+ ## Run your application 1. Navigate to the folder where you have your **form_recognizer_quickstart.py** file.
if __name__ == "__main__":
python form_recognizer_quickstart.py ```
-That's it, congratulations!
+That's it, congratulations!
In this quickstart, you used the Form Recognizer Python SDK to analyze various forms in different ways. Next, explore the reference documentation to learn more about the Form Recognizer v3.0 API.
-## Next steps
-
-> [!div class="nextstepaction"]
-> [REST API v3.0 reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-1/operations/AnalyzeDocument)
+## Next step
> [!div class="nextstepaction"]
-> [Form Recognizer Python reference library](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0b1/https://docsupdatetracker.net/index.html)
+> [Learn more about Form Recognizer REST API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
azure-arc Managed Instance Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/managed-instance-disaster-recovery.md
Previously updated : 01/27/2022 Last updated : 03/16/2022
On the geo-secondary DR instance, run the following command to promote it to pri
```azurecli az sql mi-arc dag update -k test --name dagtests --use-k8s --role force-primary-allow-data-loss ```
+## Limitation
+
+When you use [SQL Server Management Studio Object Explorer to create a database](/sql/relational-databases/databases/create-a-database#SSMSProcedure), the application returns an error. You can [create new databases with T-SQL](/sql/relational-databases/databases/create-a-database#TsqlProcedure).
azure-cache-for-redis Cache Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-failover.md
Previously updated : 11/3/2021 Last updated : 03/15/2022+ # Failover and patching for Azure Cache for Redis
Most client libraries attempt to reconnect to the cache if they're configured to
### Can I be notified in advance of planned maintenance?
-Azure Cache for Redis publishes notifications on a publish/subscribe (pub/sub) channel called [AzureRedisEvents](https://github.com/Azure/AzureCacheForRedis/blob/main/AzureRedisEvents.md) around 30 seconds before planned updates. The notifications are runtime notifications.
-
-The notifications are for applications that use circuit breakers to bypass the cache or applications that buffer commands. For example, the cache could be bypassed during any planned updates.
-
-The `AzureRedisEvents` channel isn't a mechanism that can notify you days or hours in advance. The channel can notify clients of any upcoming planned server maintenance events that might affect server availability.
-
-Many popular Redis client libraries support subscribing to pub/sub channels. Receiving notifications from the `AzureRedisEvents` channel is usually a simple addition to your client application.
-
-Once your application is subscribed to `AzureRedisEvents`, it receives a notification 30 seconds before any node is affected by a maintenance event. The notification includes details about the upcoming event and indicates whether it affects a primary or replica node.
-
-Another notification is sent minutes later when the maintenance operation is complete.
+Azure Cache for Redis publishes runtime maintenance notifications on a publish/subscribe (pub/sub) channel called `AzureRedisEvents`. Many popular Redis client libraries support subscribing to pub/sub channels. Receiving notifications from the `AzureRedisEvents` channel is usually a simple addition to your client application. For more information about maintenance events, see [AzureRedisEvents](https://github.com/Azure/AzureCacheForRedis/blob/main/AzureRedisEvents.md).
-Your application uses the content in the notification to take action to avoid using the cache while the maintenance is done. A cache might implement a circuit breaker pattern where traffic is routed away from the cache during the maintenance operation. Instead, traffic is sent directly to a persistent store. The notification isn't intended to allow time for a person to be alerted and take manual action.
-
-In most cases, your application doesn't need to subscribe to `AzureRedisEvents` or respond to notifications. Instead, we recommend implementing [building in resilience](#build-in-resiliency).
-
-With sufficient resilience, applications gracefully handle any brief connection loss or cache unavailability like that experienced during node maintenance. It's also possible that your application might unexpectedly lose its connection to the cache without warning from `AzureRedisEvents` because of network errors or other events.
-
-We only recommend subscribing to `AzureRedisEvents` in a few noteworthy cases:
--- Applications with extreme performance requirements, where even minor delays must be avoided. In such scenarios, traffic could be seamlessly rerouted to a backup cache before maintenance begins on the current cache.
-- Applications that explicitly read data from replica rather than primary nodes. During maintenance on a replica node, the application could temporarily switch to read data from primary nodes.
-- Applications that can't risk write operations failing silently or succeeding without confirmation, which can happen as connections are being closed for maintenance. If those cases would result in dangerous data loss, the application can proactively pause or redirect write commands before the maintenance is scheduled to begin.
+> [!NOTE]
+> The `AzureRedisEvents` channel isn't a mechanism that can notify you days or hours in advance. The channel can notify clients of any upcoming planned server maintenance events that might affect server availability.
### Client network-configuration changes
azure-cache-for-redis Cache High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-high-availability.md
description: Learn about Azure Cache for Redis high availability features and op
Previously updated : 02/02/2022 Last updated : 03/16/2022
Azure Cache for Redis implements high availability by using multiple VMs, called
| Option | Description | Availability | Standard | Premium | Enterprise |
| - | - | - | :: | :: | :: |
-| [Standard replication](#standard-replication)| Dual-node replicated configuration in a single datacenter with automatic failover | 99.9% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |✔|✔|-|
+| [Standard replication](#standard-replication)| Dual-node replicated configuration in a single data center with automatic failover | 99.9% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |✔|✔|-|
| [Zone redundancy](#zone-redundancy) | Multi-node replicated configuration across AZs, with automatic failover | 99.9% in Premium; 99.99% in Enterprise (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|✔|✔|
-| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | Up to 99.999% (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|✔|✔|
+| [Geo-replication](#geo-replication) | Linked cache instances in two regions, with user-controlled failover | Premium; Enterprise (see [details](https://azure.microsoft.com/support/legal/sla/cache/v1_1/)) |-|Passive|Active|
## Standard replication
If the primary node in a Redis cache is unavailable, the replica promotes itself
A primary node can go out of service as part of a planned maintenance activity, such as Redis software or operating system update. It also can stop working because of unplanned events such as failures in underlying hardware, software, or network. [Failover and patching for Azure Cache for Redis](cache-failover.md) provides a detailed explanation on types of Redis failovers. An Azure Cache for Redis goes through many failovers during its lifetime. The design of the high availability architecture makes these changes inside a cache as transparent to its clients as possible.
-Also, Azure Cache for Redis provides more replica nodes in the Premium tier. A [multi-replica cache](cache-how-to-multi-replicas.md) can be configured with up to three replica nodes. Having more replicas generally improves resiliency because you have nodes backing up the primary. Even with more replicas, an Azure Cache for Redis instance still can be severely impacted by a datacenter- or AZ-level outage. You can increase cache availability by using multiple replicas with [zone redundancy](#zone-redundancy).
+Also, Azure Cache for Redis provides more replica nodes in the Premium tier. A [multi-replica cache](cache-how-to-multi-replicas.md) can be configured with up to three replica nodes. Having more replicas generally improves resiliency because you have nodes backing up the primary. Even with more replicas, an Azure Cache for Redis instance still can be severely impacted by a data center- or AZ-level outage. You can increase cache availability by using multiple replicas with [zone redundancy](#zone-redundancy).
## Zone redundancy
-Azure Cache for Redis supports zone redundant configurations in the Premium and Enterprise tiers. A [zone redundant cache](cache-how-to-zone-redundancy.md) can place its nodes across different [Azure Availability Zones](../availability-zones/az-overview.md) in the same region. It eliminates datacenter or AZ outage as a single point of failure and increases the overall availability of your cache.
+Azure Cache for Redis supports zone redundant configurations in the Premium and Enterprise tiers. A [zone redundant cache](cache-how-to-zone-redundancy.md) can place its nodes across different [Azure Availability Zones](../availability-zones/az-overview.md) in the same region. It eliminates data center or AZ outage as a single point of failure and increases the overall availability of your cache.
### Premium tier
azure-cache-for-redis Cache Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-overview.md
Previously updated : 02/02/2022 Last updated : 03/15/2022
-#Customer intent: As a developer, I want to understand what Azure Cache for Redis is and how it can improve performance in my application.
# About Azure Cache for Redis
Azure Cache for Redis improves application performance by supporting common appl
Azure Cache for Redis supports OSS Redis version 4.0.x and 6.0.x. We've made the decision to skip Redis 5.0 to bring you the latest version. Previously, Azure Cache for Redis maintained a single Redis version. In the future, it will provide a newer major release upgrade and at least one older stable version. You can [choose which version](cache-how-to-version.md) works the best for your application.

## Service tiers

Azure Cache for Redis is available in these tiers:
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
| Feature Description | Basic | Standard | Premium | Enterprise | Enterprise Flash |
| - | :--: | :--: | :--: | :--: | :--: |
| [Service Level Agreement (SLA)](https://azure.microsoft.com/support/legal/sla/cache/v1_0/) |-|✔|✔|✔|✔|
-| Data encryption |✔|✔|✔|✔|✔|
+| Data encryption in transit |✔|✔|✔|✔|✔|
| [Network isolation](cache-how-to-premium-vnet.md) |✔|✔|✔|✔|✔|
| [Scaling](cache-how-to-scale.md) |✔|✔|✔|-|-|
| [OSS clustering](cache-how-to-premium-clustering.md) |-|-|✔|✔|✔|
The [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/
Consider the following options when choosing an Azure Cache for Redis tier:
-* **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tiers 12 GB - 14 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). For more information, see [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
-* **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier Caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).
-* **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis uses other cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation, which will cause timeouts in your application.
-* **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).
-* **Maximum number of client connections**: The Premium and Enterprise tiers offer the maximum numbers of clients that can connect to Redis, offering higher numbers of connections for larger sized caches. Clustering increases the total amount of network bandwidth available for a clustered cache.
-* **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss.
-* **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Data persistence can be enabled through Azure portal and CLI. For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md).
-* **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).
-* **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redislabs.com/latest/modules/redisearch/), [RedisBloom](https://docs.redislabs.com/latest/modules/redisbloom/) and [RedisTimeSeries](https://docs.redislabs.com/latest/modules/redistimeseries/). These modules add new data types and functionality to Redis.
+- **Memory**: The Basic and Standard tiers offer 250 MB – 53 GB; the Premium tier 6 GB - 1.2 TB; the Enterprise tiers 12 GB - 14 TB. To create a Premium tier cache larger than 120 GB, you can use Redis OSS clustering. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/) and [How to configure clustering for a Premium Azure Cache for Redis](cache-how-to-premium-clustering.md).
+- **Performance**: Caches in the Premium and Enterprise tiers are deployed on hardware that has faster processors, giving better performance compared to the Basic or Standard tier. Premium tier caches have higher throughput and lower latencies. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).
+- **Dedicated core for Redis server**: All caches except C0 run dedicated VM cores. Redis, by design, uses only one thread for command processing. Azure Cache for Redis uses other cores for I/O processing. Having more cores improves throughput performance even though it may not produce linear scaling. Furthermore, larger VM sizes typically come with higher bandwidth limits than smaller ones. That helps you avoid network saturation, which will cause timeouts in your application.
+- **Network performance**: If you have a workload that requires high throughput, the Premium or Enterprise tier offers more bandwidth compared to Basic or Standard. Also within each tier, larger size caches have more bandwidth because of the underlying VM that hosts the cache. For more information, see [Azure Cache for Redis performance](./cache-planning-faq.yml#azure-cache-for-redis-performance).
+- **Maximum number of client connections**: The Premium and Enterprise tiers offer the highest numbers of clients that can connect to Redis, with larger cache sizes supporting more connections. Clustering increases the total amount of network bandwidth available for a clustered cache.
+- **High availability**: Azure Cache for Redis provides multiple [high availability](cache-high-availability.md) options. It guarantees that a Standard, Premium, or Enterprise cache is available according to our [SLA](https://azure.microsoft.com/support/legal/sla/cache/v1_0/). The SLA only covers connectivity to the cache endpoints. The SLA doesn't cover protection from data loss. We recommend using the Redis data persistence feature in the Premium and Enterprise tiers to increase resiliency against data loss.
+- **Data persistence**: The Premium and Enterprise tiers allow you to persist the cache data to an Azure Storage account and a Managed Disk respectively. Underlying infrastructure issues might result in potential data loss. We recommend using the Redis data persistence feature in these tiers to increase resiliency against data loss. Azure Cache for Redis offers both RDB and AOF (preview) options. Data persistence can be enabled through Azure portal and CLI. For the Premium tier, see [How to configure persistence for a Premium Azure Cache for Redis](cache-how-to-premium-persistence.md).
+- **Network isolation**: Azure Private Link and Virtual Network (VNET) deployments provide enhanced security and traffic isolation for your Azure Cache for Redis. VNET allows you to further restrict access through network access control policies. For more information, see [Azure Cache for Redis with Azure Private Link](cache-private-link.md) and [How to configure Virtual Network support for a Premium Azure Cache for Redis](cache-how-to-premium-vnet.md).
+- **Redis Modules**: Enterprise tiers support [RediSearch](https://docs.redislabs.com/latest/modules/redisearch/), [RedisBloom](https://docs.redislabs.com/latest/modules/redisbloom/) and [RedisTimeSeries](https://docs.redislabs.com/latest/modules/redistimeseries/). These modules add new data types and functionality to Redis.
You can scale your cache from the Basic tier up to Premium after it has been created. Scaling down to a lower tier isn't supported currently. For step-by-step scaling instructions, see [How to Scale Azure Cache for Redis](cache-how-to-scale.md) and [How to automate a scaling operation](cache-how-to-scale.md#how-to-automate-a-scaling-operation).

### Special considerations for Enterprise tiers

The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis from Redis Labs. Customers obtain and pay for a license to this software through an Azure Marketplace offer. Azure Cache for Redis manages the license acquisition so that you won't have to do it separately. To purchase in the Azure Marketplace, you must have the following prerequisites:
-* Your Azure subscription has a valid payment instrument. Azure credits or free MSDN subscriptions aren't supported.
-* Your organization allows [Azure Marketplace purchases](../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases).
-* If you use a private Marketplace, it must contain the Redis Labs Enterprise offer.
+
+- Your Azure subscription has a valid payment instrument. Azure credits or free MSDN subscriptions aren't supported.
+- Your organization allows [Azure Marketplace purchases](../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases).
+- If you use a private Marketplace, it must contain the Redis Labs Enterprise offer.
> [!IMPORTANT]
> Azure Cache for Redis Enterprise requires standard network Load Balancers that are charged
The Enterprise tiers rely on Redis Enterprise, a commercial variant of Redis fro
## Next steps
-* [Create an open-source Redis cache](quickstart-create-redis.md)
-* [Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md)
-* [Use Azure Cache for Redis in an ASP.NET web app](cache-web-app-howto.md)
-* [Use Azure Cache for Redis in .NET Core](cache-dotnet-core-quickstart.md)
-* [Use Azure Cache for Redis in .NET Framework](cache-dotnet-how-to-use-azure-redis-cache.md)
-* [Use Azure Cache for Redis in Node.js](cache-nodejs-get-started.md)
-* [Use Azure Cache for Redis in Java](cache-java-get-started.md)
-* [Use Azure Cache for Redis in Python](cache-python-get-started.md)
+- [Create an open-source Redis cache](quickstart-create-redis.md)
+- [Create a Redis Enterprise cache](quickstart-create-redis-enterprise.md)
+- [Use Azure Cache for Redis in an ASP.NET web app](cache-web-app-howto.md)
+- [Use Azure Cache for Redis in .NET Core](cache-dotnet-core-quickstart.md)
+- [Use Azure Cache for Redis in .NET Framework](cache-dotnet-how-to-use-azure-redis-cache.md)
+- [Use Azure Cache for Redis in Node.js](cache-nodejs-get-started.md)
+- [Use Azure Cache for Redis in Java](cache-java-get-started.md)
+- [Use Azure Cache for Redis in Python](cache-python-get-started.md)
azure-functions Functions Host Json V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-host-json-v1.md
Configuration settings for the [Azure Cosmos DB trigger and bindings](functions-
## eventHub
-Configuration settings for [Event Hub triggers and bindings](functions-bindings-event-hubs.md#functions-1x).
+Configuration settings for [Event Hub triggers and bindings](functions-bindings-event-hubs.md?tabs=functionsv1#hostjson-settings).
## functions
azure-functions Functions Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-premium-plan.md
To learn more about how scaling works, see [Event-driven scaling in Azure Functi
Azure Functions in a Consumption plan are limited to 10 minutes for a single execution. In the Premium plan, the run duration defaults to 30 minutes to prevent runaway executions. However, you can [modify the host.json configuration](./functions-host-json.md#functiontimeout) to make the duration unbounded for Premium plan apps. When set to an unbounded duration, your function app is guaranteed to run for at least 60 minutes.
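The unbounded duration described above is set through the `functionTimeout` property in *host.json*. As a sketch (verify the exact value against the host.json reference for your Functions runtime version), a Premium plan app might look like:

```json
{
  "version": "2.0",
  "functionTimeout": "-1"
}
```

A value of `-1` removes the limit on Premium and Dedicated plans; the Premium default shown in the text above corresponds to `"00:30:00"`, and Consumption plan apps remain capped regardless of this setting.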
+## Migration
+
+If you have an existing function app, you can use Azure CLI commands to migrate your app between a Consumption plan and a Premium plan on Windows. The specific commands depend on the direction of the migration. To learn more, see [Plan migration](functions-how-to-use-azure-function-app-settings.md#plan-migration).
+
+This migration isn't supported on Linux.
+
## Plan and SKU settings

When you create the plan, there are two plan size settings: the minimum number of instances (or plan size) and the maximum burst limit.
azure-maps Indoor Map Dynamic Styling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/indoor-map-dynamic-styling.md
The following script implements the mouse-click event. The code retrieves the fe
/* Upon a mouse click, log the feature properties to the browser's console. */
map.events.add("click", function(e){
- var features = map.layers.getRenderedShapes(e.position, "indoor");
+ var features = map.layers.getRenderedShapes(e.position, "unit");
features.forEach(function (feature) {
    if (feature.layer.id == 'indoor_unit_office') {
azure-maps Power Bi Visual Geocode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-geocode.md
+
+ Title: Geocoding in Azure Maps Power BI visual
+
+description: In this article, you'll learn about geocoding in Azure Maps Power BI visual.
+Last updated : 03/16/2022
+# Geocoding in Azure Maps Power BI Visual
+
+Azure Maps uses the latitude and longitude coordinate system to locate places on the map. The Azure Maps Power BI Visual provides latitude and longitude fields to pinpoint a specific location on the map. However, most data sources use an address to pinpoint a location rather than latitude and longitude values.
+
+The Azure Maps Power BI Visual now provides a **Location** field that accepts address values that can be used to pinpoint a location on the map using geocoding.
+
+Geocoding is the process of taking an address and returning the corresponding latitude/longitude coordinate. The address determines the granularity of geocoding that's possible, such as a city as opposed to a specific street address.
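As an illustrative sketch of that granularity point, the lookup table below is hypothetical stand-in data, not the Azure Maps service: more address detail yields a more precise coordinate.

```python
# Hypothetical stand-in for a geocoding service. The coordinates are
# illustrative: a bare city name resolves to a centroid, while a full
# street address resolves to a specific point.
GEOCODE = {
    "Seattle, WA": (47.6062, -122.3321),                # city-level centroid
    "400 Broad St, Seattle, WA": (47.6205, -122.3493),  # street-address level
}

def geocode(address):
    """Return the (latitude, longitude) for an address from the stub table."""
    return GEOCODE[address]

print(geocode("Seattle, WA"))  # (47.6062, -122.3321)
```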
++
+## The location field
+
+The **Location** field in the Azure Maps Power BI Visual can accept multiple values, such as country, region, state, city, street address, and zip code. By providing multiple sources of location information in the Location field, you help guarantee more accurate results and eliminate ambiguity that would prevent a specific location from being determined. For example, there are over 20 different cities in the United States named *Franklin*.
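One way to supply that extra context is to combine the available address parts into a single value before binding it to the **Location** field. A minimal sketch; the helper below is hypothetical, not part of the Power BI or Azure Maps API:

```python
def composite_location(city, state="", country=""):
    """Join the available address parts into one less-ambiguous location string."""
    parts = [p for p in (city, state, country) if p]
    return ", ".join(parts)

# "Franklin" alone matches more than 20 US cities; adding state and country
# pins the geocoder down to a single place.
print(composite_location("Franklin"))                         # Franklin
print(composite_location("Franklin", "TN", "United States"))  # Franklin, TN, United States
```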
+
+## Use geo-hierarchies to drill down
+
+When entering multiple values into the **Location** field, you create a geo-hierarchy. Geo-hierarchies enable the hierarchical drill-down features in the map, allowing you to drill down to different "levels" of location.
++
+| Button | Description |
+|:-:|-|
+| 1 | The drill button on the far right, called Drill Mode, allows you to select a map location and drill down into that specific location one level at a time. For example, if you turn on the drill-down option and select North America, you move down in the hierarchy to the next level: states in North America. For geocoding, Power BI sends Azure Maps country and state data for North America only. The button on the left goes back up one level. |
+| 2 | The double arrow drills to the next level of the hierarchy for all locations at once. For example, if you're currently looking at countries and then use this option to move to the next level, states, Power BI displays state data for all countries. For geocoding, Power BI sends Azure Maps state data (no country data) for all locations. This option is useful if each level of your hierarchy is unrelated to the level above it. |
+| 3 | Similar to the drill-down option, except that you don't need to click on the map. It expands down to the next level of the hierarchy, remembering the current level's context. For example, if you're currently looking at countries and select this icon, you move down in the hierarchy to the next level: states. For geocoding, Power BI sends data for each state and its corresponding country to help Azure Maps geocode more accurately. In most maps, you'll use either this option or the drill-down option on the far right, because it sends Azure Maps as much information as possible, resulting in more accurate location information. |
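The difference between drill options 2 and 3 in the table above comes down to how much parent context accompanies each level. A hypothetical sketch (the field names are illustrative, not Power BI identifiers):

```python
# Geo fields bound to the Location field, from top of the hierarchy to bottom.
HIERARCHY = ["Country", "State", "City"]

def fields_sent(level, keep_context):
    """Fields Power BI would send to Azure Maps at a drill level (0 = top).

    keep_context=True mirrors option 3 (current level plus all parent levels,
    for more accurate geocoding); False mirrors option 2 (current level only).
    """
    return HIERARCHY[: level + 1] if keep_context else [HIERARCHY[level]]

print(fields_sent(1, keep_context=True))   # ['Country', 'State']
print(fields_sent(1, keep_context=False))  # ['State']
```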
+
+## Categorize geographic fields in Power BI
+
+To ensure fields are correctly geocoded, you can set the Data Category on the data fields in Power BI. In Data view, select the desired column. From the ribbon, select the Modeling tab and then set the Data Category to one of the following: Address, City, Continent, Country, Region, County, Postal Code, State, or Province. These data categories help Azure Maps correctly geocode the data. To learn more, see [Data categorization in Power BI Desktop](/power-bi/transform-model/desktop-data-categorization). If you're live connecting to SQL Server Analysis Services, you'll need to set the data categorization outside of Power BI using [SQL Server Data Tools (SSDT)](/sql/ssdt/download-sql-server-data-tools-ssdt).
++
+## Next steps
+
+Learn more about the Azure Maps Power BI visual:
+
+> [!div class="nextstepaction"]
+> [Get started with Azure Maps Power BI visual (Preview)](power-bi-visual-get-started.md)
+
+> [!div class="nextstepaction"]
+> [Understanding layers in the Azure Maps Power BI visual](power-bi-visual-understanding-layers.md)
+
+Learn about the Azure Maps Power BI visual Pie Chart layer that uses geocoding:
+
+> [!div class="nextstepaction"]
+> [Add a pie chart layer](power-bi-visual-add-pie-chart-layer.md)
azure-monitor Transaction Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/transaction-diagnostics.md
This is by design. All of the related items, across all components, are already
*I see more events than expected in the transaction diagnostics experience when using the Application Insights JavaScript SDK. Is there a way to see fewer events per transaction?*

The transaction diagnostics experience shows all telemetry in a [single operation](correlation.md#data-model-for-telemetry-correlation) that shares an [Operation Id](data-model-context.md#operation-id). By default, the Application Insights SDK for JavaScript creates a new operation for each unique page view. In a Single Page Application (SPA), only one page view event is generated and a single Operation Id is used for all telemetry generated; this can result in many events being correlated to the same operation. In these scenarios, you can use Automatic Route Tracking to automatically create new operations for navigation in your single page app. You must turn on [enableAutoRouteTracking](javascript.md#single-page-applications) so a page view is generated every time the URL route is updated (a logical page view occurs). If you want to manually refresh the Operation Id, you can do so by calling `appInsights.properties.context.telemetryTrace.traceID = Microsoft.ApplicationInsights.Telemetry.Util.generateW3CId()`. Manually triggering a PageView event also resets the Operation Id.
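The `generateW3CId` call mentioned above produces a W3C trace-context trace ID: 16 random bytes rendered as 32 lowercase hex characters. A rough Python equivalent, offered as a stand-in for the SDK helper rather than its actual implementation:

```python
import uuid

def generate_w3c_trace_id():
    """Return 32 lowercase hex characters, the W3C trace-id wire format."""
    return uuid.uuid4().hex

trace_id = generate_w3c_trace_id()
print(len(trace_id))  # 32
```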
+*Why do transaction detail durations not add up to the top-request duration?*
+
+Time not explained in the Gantt chart is time that isn't covered by a tracked dependency.
+This can be due to external calls that weren't instrumented (automatically or manually), or to time spent in process rather than in an external call.
+
+If all calls were instrumented, in-process time is the likely root cause for the time spent. A useful tool for diagnosing in-process time is the [Application Insights profiler](./profiler.md).
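Numerically, the unexplained portion is the request duration minus the time covered by tracked dependency calls. A toy sketch (durations are made-up milliseconds):

```python
def unexplained_ms(request_ms, dependency_ms):
    """Time in a request not attributed to any tracked dependency call.

    Assumes the dependency calls ran sequentially (no overlap); overlapping
    calls would make this an underestimate of in-process time.
    """
    return request_ms - sum(dependency_ms)

# A 500 ms request with tracked calls of 120 ms and 180 ms leaves 200 ms
# of in-process (or uninstrumented) time.
print(unexplained_ms(500, [120, 180]))  # 200
```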
azure-monitor Log Analytics Workspace Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-overview.md
The following table summarizes the differences between the plans.
|:---|:---|:---|
| Ingestion | Cost for ingestion. | Reduced cost for ingestion. |
| Log queries | No additional cost. Full query language. | Additional cost. Subset of query language. |
-| Retention | Configure retention from 30 days to 750 days. | Retention fixed at 8 days. |
+| Retention | Configure retention from 30 days to 730 days. | Retention fixed at 8 days. |
| Alerts | Supported. | Not supported. |

## Ingestion-time transformations
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Last updated 02/09/2022
# Log Analytics workspace data export in Azure Monitor
-Data export in Log Analytics workspace lets you continuously export data per selected tables in your workspace, to an Azure Storage Account or Azure Event Hubs as it's collected. This article provides details on this feature and steps to configure data export in your workspaces.
+Data export in a Log Analytics workspace lets you continuously export data from selected tables in your workspace to an Azure Storage Account or Azure Event Hubs as it arrives at the Azure Monitor pipeline. This article provides details on this feature and steps to configure data export in your workspaces.
## Overview

Data in Log Analytics is available for the retention period defined in your workspace, and is used in various experiences provided in Azure Monitor and Azure services. There are cases where you need to use other tools:

* Tamper-protected store compliance – data can't be altered in Log Analytics once ingested, but it can be purged. Export to a Storage Account set with [immutability policies](../../storage/blobs/immutable-policy-configure-version-scope.md) to keep data tamper protected.
-* Integration with Azure services and other tools – export to Event Hubs in near-real-time to send data to your services and tools as it arrives to Azure Monitor.
+* Integration with Azure services and other tools – export to Event Hubs as data arrives at and is processed in Azure Monitor.
* Keep audit and security data for a very long time – export to a Storage Account in the workspace's region, or replicate data to other regions using any of the [Azure Storage redundancy options](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region), including "GRS" and "GZRS".

After configuring data export rules in a Log Analytics workspace, new data for tables in rules is exported from the Azure Monitor pipeline to your Storage Account or Event Hubs as it arrives.
Data is exported without a filter. For example, when you configure a data export
## Other export options

Log Analytics workspace data export continuously exports data that is sent to your Log Analytics workspace. There are other options to export data for particular scenarios:
+- Configure diagnostic settings in Azure resources. Logs are sent to the destination directly, with lower latency compared to data export in Log Analytics.
- Scheduled export from a log query using a Logic App. This is similar to the data export feature, but allows you to send filtered or aggregated data to an Azure Storage Account. This method is subject to [log query limits](../service-limits.md#log-analytics-workspaces). See [Archive data from Log Analytics workspace to Azure Storage Account using Logic App](logs-export-logic-app.md).
- One-time export to a local machine using a PowerShell script. See [Invoke-AzOperationalInsightsQueryExport](https://www.powershellgallery.com/packages/Invoke-AzOperationalInsightsQueryExport).
azure-netapp-files Create Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-active-directory-connections.md
Several features of Azure NetApp Files require that you have an Active Directory
The Network Security Groups (NSGs) and firewalls must have appropriately configured rules to allow for Active Directory and DNS traffic requests.
-* The Azure NetApp Files delegated subnet must be able to reach all Active Directory Domain Services (ADDS) domain controllers in the domain, including all local and remote domain controllers. Otherwise, service interruption can occur.
+* The Azure NetApp Files delegated subnet must be able to reach all Active Directory Domain Services (AD DS) domain controllers in the domain, including all local and remote domain controllers. Otherwise, service interruption can occur.
If you have domain controllers that are unreachable by the Azure NetApp Files delegated subnet, you can specify an Active Directory site during creation of the Active Directory connection. Azure NetApp Files needs to communicate only with domain controllers in the site where the Azure NetApp Files delegated subnet address space is.
Several features of Azure NetApp Files require that you have an Active Directory
## Decide which Domain Services to use
-Azure NetApp Files supports both [Active Directory Domain Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) (ADDS) and Azure Active Directory Domain Services (AADDS) for AD connections. Before you create an AD connection, you need to decide whether to use ADDS or AADDS.
+Azure NetApp Files supports both [Active Directory Domain Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) (AD DS) and Azure Active Directory Domain Services (AADDS) for AD connections. Before you create an AD connection, you need to decide whether to use AD DS or AADDS.
For more information, see [Compare self-managed Active Directory Domain Services, Azure Active Directory, and managed Azure Active Directory Domain Services](../active-directory-domain-services/compare-identity-solutions.md).

### Active Directory Domain Services
-You can use your preferred [Active Directory Sites and Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) scope for Azure NetApp Files. This option enables reads and writes to Active Directory Domain Services (ADDS) domain controllers that are [accessible by Azure NetApp Files](azure-netapp-files-network-topologies.md). It also prevents the service from communicating with domain controllers that are not in the specified Active Directory Sites and Services site.
+You can use your preferred [Active Directory Sites and Services](/windows-server/identity/ad-ds/plan/understanding-active-directory-site-topology) scope for Azure NetApp Files. This option enables reads and writes to Active Directory Domain Services (AD DS) domain controllers that are [accessible by Azure NetApp Files](azure-netapp-files-network-topologies.md). It also prevents the service from communicating with domain controllers that are not in the specified Active Directory Sites and Services site.
-To find your site name when you use ADDS, you can contact the administrative group in your organization that is responsible for Active Directory Domain Services. The example below shows the Active Directory Sites and Services plugin where the site name is displayed:
+To find your site name when you use AD DS, you can contact the administrative group in your organization that is responsible for Active Directory Domain Services. The example below shows the Active Directory Sites and Services plugin where the site name is displayed:
![Active Directory Sites and Services](../media/azure-netapp-files/azure-netapp-files-active-directory-sites-services.png)
This setting is configured in the **Active Directory Connections** under **NetAp
![Join Active Directory](../media/azure-netapp-files/azure-netapp-files-join-active-directory.png)
- * **AES Encryption**
+ * <a name="aes-encryption"></a>**AES Encryption**
Select this checkbox if you want to enable AES encryption for AD authentication or if you require [encryption for SMB volumes](azure-netapp-files-create-volumes-smb.md#add-an-smb-volume). See [Requirements for Active Directory connections](#requirements-for-active-directory-connections) for requirements. ![Active Directory AES encryption](../media/azure-netapp-files/active-directory-aes-encryption.png)
- The **AES Encryption** feature is currently in preview. If this is your first time using this feature, register the feature before using it:
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAesEncryption
- ```
+ * <a name="encrypted-smb-connection"></a>**Encrypted SMB connection to domain controller**
- Check the status of the feature registration:
+ Select this checkbox to enable SMB encryption for communication between the Azure NetApp Files service and the domain controller (DC). When you enable this functionality, the SMB3 protocol is used for encrypted DC connections, because encryption is supported only by SMB3. Creation of SMB, Kerberos, and LDAP-enabled volumes will fail if the DC doesn't support the SMB3 protocol.
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to`Registered`. Wait until the status is `Registered` before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFAesEncryption
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
+ ![Screenshot that shows the option for encrypted SMB connection to domain controller.](../media/azure-netapp-files/encrypted-smb-domain-controller.png)
* **LDAP Signing**

   Select this checkbox to enable LDAP signing. This functionality enables secure LDAP lookups between the Azure NetApp Files service and the user-specified [Active Directory Domain Services domain controllers](/windows/win32/ad/active-directory-domain-services). For more information, see [ADV190023 | Microsoft Guidance for Enabling LDAP Channel Binding and LDAP Signing](https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/ADV190023).
This setting is configured in the **Active Directory Connections** under **NetAp
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
* **LDAP over TLS**
- See [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) for information about this option.
+ See [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) for information about this option.
* **LDAP Search Scope**, **User DN**, **Group DN**, and **Group Membership Filter**
- See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
+ See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md#ldap-search-scope) for information about these options.
* **Security privilege users** <!-- SMB CA share feature -->
You can grant security privilege (`SeSecurityPrivilege`) to AD users or groups that require elevated privilege to access the Azure NetApp Files volumes. The specified AD users or groups will be allowed to perform certain actions on Azure NetApp Files SMB shares that require security privilege not assigned by default to domain users.
![Screenshot showing the Security privilege users box of Active Directory connections window.](../media/azure-netapp-files/security-privilege-users.png)
- * **Backup policy users**
+ * <a name="backup-policy-users"></a>**Backup policy users**
You can grant additional security privileges to AD users or groups that require elevated backup privileges to access the Azure NetApp Files volumes. The specified AD user accounts or groups will have elevated NTFS permissions at the file or folder level. For example, you can specify a non-privileged service account used for backing up, restoring, or migrating data to an SMB file share in Azure NetApp Files. The following privileges apply when you use the **Backup policy users** setting:
| `SeRestorePrivilege` | Restore files and directories, overriding any ACLs. <br> Set any valid user or group SID as the file owner. |
| `SeChangeNotifyPrivilege` | Bypass traverse checking. <br> Users with this privilege are not required to have traverse (`x`) permissions to traverse folders or symlinks. |
- ![Active Directory backup policy users](../media/azure-netapp-files/active-directory-backup-policy-users.png)
-
- The **Backup policy users** feature is currently in preview. If this is your first time using this feature, register the feature before using it:
-
- ```azurepowershell-interactive
- Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBackupOperator
- ```
-
- Check the status of the feature registration:
-
- > [!NOTE]
- > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to`Registered`. Wait until the status is `Registered` before continuing.
-
- ```azurepowershell-interactive
- Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFBackupOperator
- ```
-
- You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
+ ![Active Directory backup policy users](../media/azure-netapp-files/active-directory-backup-policy-users.png)
* **Administrators privilege users**
* [Create a dual-protocol volume](create-volumes-dual-protocol.md)
* [Configure NFSv4.1 Kerberos encryption](configure-kerberos-encryption.md)
* [Install a new Active Directory forest using Azure CLI](/windows-server/identity/ad-ds/deploy/virtual-dc/adds-on-azure-vm)
-* [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
-* [ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
+* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
+* [AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md)
azure-netapp-files Modify Active Directory Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/modify-active-directory-connections.md
Title: Modify an Active Directory Connection for Azure NetApp Files | Microsoft Docs
description: This article shows you how to modify Active Directory connections for Azure NetApp Files.
Last updated: 03/15/2022
Once you have [created an Active Directory connection](create-active-directory-c
| Username | Username of the Active Directory domain administrator | Yes | None* | Credential change to contact DC |
| Password | Password of the Active Directory domain administrator | Yes | None* | Credential change to contact DC |
| Kerberos Realm: AD Server Name | The name of the Active Directory machine. This option is only used when creating a Kerberos volume. | Yes | None* | |
-| Kerberos Realm: KDC IP | Specifies the IP address of the Kerberos Distribution Center (KDC) server. KDC in Azure NetApp Files is an Active Directory server | Yes | None | A new KDC IP address will be used | None* |
+| Kerberos Realm: KDC IP | Specifies the IP address of the Key Distribution Center (KDC) server. The KDC in Azure NetApp Files is an Active Directory server. | Yes | None | A new KDC IP address will be used |
| Region | The region where the Active Directory credentials are associated | No | None | N/A |
| User DN | User domain name, which overrides the base DN for user lookups. Nested userDN can be specified in `OU=subdirectory, OU=directory, DC=domain, DC=com` format. | Yes | None* | User search scope gets limited to User DN instead of base DN. |
| Group DN | Group domain name. groupDN overrides the base DN for group lookups. Nested groupDN can be specified in `OU=subdirectory, OU=directory, DC=domain, DC=com` format. | Yes | None* | Group search scope gets limited to Group DN instead of base DN. |
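The `OU=..., DC=..., DC=...` format in the rows above maps each dot-separated DNS label of the domain to a `DC=` component. As a hedged illustration (the OU names and `contoso.com` domain below are placeholders, not values from this article), a nested DN can be assembled like this:

```shell
# Hypothetical sketch: build a nested User DN in the OU=...,DC=...,DC=...
# format described above. The OU path and domain are placeholder values.
ou_path="OU=subdirectory,OU=directory"
domain="contoso.com"
# Turn each dot-separated DNS label into a DC= component.
dc_part="DC=$(echo "$domain" | sed 's/\./,DC=/g')"
user_dn="${ou_path},${dc_part}"
echo "$user_dn"
```

This prints `OU=subdirectory,OU=directory,DC=contoso,DC=com`; the table shows the same shape written with spaces after the commas.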
azure-netapp-files Troubleshoot Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/troubleshoot-volumes.md
Previously updated : 01/21/2022 Last updated : 03/15/2022
# Troubleshoot volume errors for Azure NetApp Files
This article describes error messages and resolutions that can help you troubleshoot Azure NetApp Files volumes.
+## General errors for volume creation or management
+
+| Error conditions | Resolutions |
+|-|-|
+| Error during SMB, LDAP, or Kerberos volume creation: <br> `Failed to create the Active Directory machine account "PAKA-5755". Reason: SecD Error: no server available Details: Error: Machine account creation procedure failed [ 34] Loaded the preliminary configuration. [ 80] Created a machine account in the domain [ 81] Successfully connected to ip 10.193.169.25, port 445 using TCP [ 83] Unable to connect to LSA service on win-2bovaekb44b.harikrb.com (Error: RESULT_ERROR_SPINCLIENT_SOCKET_RECEIVE_ERROR) [ 83] No servers available for MS_LSA, vserver: 251, domain: http://contoso.com/. **[ 83] FAILURE: Unable to make a connection (LSA:CONTOSO.COM), ** result: 6940 [ 85] Could not find Windows SID 'S-1-5-21-192389270-1514950320-2551433173-512' [ 133] Deleted existing account 'CN=PAKA-5755,CN=Computers,DC=contoso,DC=com' .` | SMB3 is disabled on the domain controller. <br> Enable SMB3 on the domain controller and then try creating the volume. See [How to detect, enable and disable SMBv1, SMBv2, and SMBv3 in Windows](/windows-server/storage/file-server/troubleshoot/detect-enable-and-disable-smbv1-v2-v3) for details about enabling SMB3. |
+
## Errors for SMB and dual-protocol volumes
| Error conditions | Resolutions |
|-|-|
-| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if ADDS and the volume are being deployed in same region.</li> <li>Check if ADDS and the volume are using the same VNet. If they are using different VNETs, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Azure ADDS. Azure ADDS should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. |
-| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine accounts. </li> <li> If you use Azure ADDS, make sure that the user is part of the Azure AD group `Azure AD DC Administrators`. </li></ul> |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available."}]}` | This error indicates that the DNS is not reachable. <br> Consider the following solutions: <ul><li>Check if AD DS and the volume are being deployed in the same region.</li> <li>Check if AD DS and the volume are using the same VNet. If they are using different VNets, make sure that the VNets are peered with each other. See [Guidelines for Azure NetApp Files network planning](azure-netapp-files-network-topologies.md). </li> <li>The DNS server might have network security groups (NSGs) applied. As such, it does not allow the traffic to flow. In this case, open the NSGs to the DNS or AD to connect to various ports. For port requirements, see [Requirements for Active Directory connections](create-active-directory-connections.md#requirements-for-active-directory-connections). </li></ul> <br>The same solutions apply for Azure AD DS. Azure AD DS should be deployed in the same region. The VNet should be in the same region or peered with the VNet used by the volume. |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-C1C8\". Reason: Kerberos Error: Invalid credentials were given Details: Error: Machine account creation procedure failed\n [ 563] Loaded the preliminary configuration.\n**[ 670] FAILURE: Could not authenticate as 'test@contoso.com':\n** Unknown user (KRB5KDC_ERR_C_PRINCIPAL_UNKNOWN)\n. "}]}` | <ul><li>Make sure that the username entered is correct. </li> <li>Make sure that the user is part of the Administrator group that has the privilege to create machine accounts. </li> <li> If you use Azure AD DS, make sure that the user is part of the Azure AD group `Azure AD DC Administrators`. </li></ul> |
| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError", "message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-A452\". Reason: Kerberos Error: Pre-authentication information was invalid Details: Error: Machine account creation procedure failed\n [ 567] Loaded the preliminary configuration.\n [ 671] Successfully connected to ip 10.x.x.x, port 88 using TCP\n**[ 1099] FAILURE: Could not authenticate as\n** 'user@contoso.com': CIFS server account password does\n** not match password stored in Active Directory\n** (KRB5KDC_ERR_PREAUTH_FAILED)\n. "}]}` | Make sure that the password entered for joining the AD connection is correct. |
-| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError","message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-D9A2\". Reason: SecD Error: ou not found Details: Error: Machine account creation procedure failed\n [ 561] Loaded the preliminary configuration.\n [ 665] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ 1039] Successfully connected to ip 10.x.x.x, port 389 using TCP\n**[ 1147] FAILURE: Specifed OU 'OU=AADDC Com' does not exist in\n** contoso.com\n. "}]}` | Make sure that the OU path specified for joining the AD connection is correct. If you use Azure ADDS, make sure that the organizational unit path is `OU=AADDC Computers`. |
+| The SMB or dual-protocol volume creation fails with the following error: <br> `{"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/DeployOperations for usage details.","details":[{"code":"InternalServerError","message":"Error when creating - Failed to create the Active Directory machine account \"SMBTESTAD-D9A2\". Reason: SecD Error: ou not found Details: Error: Machine account creation procedure failed\n [ 561] Loaded the preliminary configuration.\n [ 665] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ 1039] Successfully connected to ip 10.x.x.x, port 389 using TCP\n**[ 1147] FAILURE: Specifed OU 'OU=AADDC Com' does not exist in\n** contoso.com\n. "}]}` | Make sure that the OU path specified for joining the AD connection is correct. If you use Azure AD DS, make sure that the organizational unit path is `OU=AADDC Computers`. |
| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL. Reason: LDAP Error: Local error occurred Details: Error: Machine account creation procedure failed. [nnn] Loaded the preliminary configuration. [nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn] Successfully connected to ip 10.x.x.x, port 389 using [nnn] Entry for host-address: 10.x.x.x not found in the current source: FILES. Ignoring and trying next available source [nnn] Source: DNS unavailable. Entry for host-address:10.x.x.x found in any of the available sources\n*[nnn] FAILURE: Unable to SASL bind to LDAP server using GSSAPI: local error [nnn] Additional info: SASL(-1): generic failure: GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Cannot determine realm for numeric host address) [nnn] Unable to connect to LDAP (Active Directory) service on contoso.com (Error: Local error) [nnn] Unable to make a connection (LDAP (Active Directory):contosa.com, result: 7643. ` | The pointer (PTR) record of the AD host machine might be missing on the DNS server. You need to create a reverse lookup zone on the DNS server, and then add a PTR record of the AD host machine in that reverse lookup zone. <br> For example, assume that the IP address of the AD machine is `10.x.x.x`, the hostname of the AD machine (as found by using the `hostname` command) is `AD1`, and the domain name is `contoso.com`. The PTR record added to the reverse lookup zone should be `10.x.x.x` -> `AD1.contoso.com`. |
| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-ANF-VOL\". Reason: Kerberos Error: KDC has no support for encryption type Details: Error: Machine account creation procedure failed [nnn]Loaded the preliminary configuration. [nnn]Successfully connected to ip 10.x.x.x, port 88 using TCP [nnn]FAILURE: Could not authenticate as 'contosa.com': KDC has no support for encryption type (KRB5KDC_ERR_ETYPE_NOSUPP) ` | Make sure that [AES Encryption](./create-active-directory-connections.md#create-an-active-directory-connection) is enabled both in the Active Directory connection and for the service account. |
| The SMB or dual-protocol volume creation fails with the following error: <br> `Failed to create the Active Directory machine account \"SMB-NTAP-VOL\". Reason: LDAP Error: Strong authentication is required Details: Error: Machine account creation procedure failed\n [ 338] Loaded the preliminary configuration.\n [ nnn] Successfully connected to ip 10.x.x.x, port 88 using TCP\n [ nnn ] Successfully connected to ip 10.x.x.x, port 389 using TCP\n [ 765] Unable to connect to LDAP (Active Directory) service on\n dc51.area51.com (Error: Strong(er) authentication\n required)\n*[ nnn] FAILURE: Unable to make a connection (LDAP (Active\n* Directory):contoso.com), result: 7609\n. "` | The LDAP Signing option is not selected, but the AD client has LDAP signing. [Enable LDAP Signing](create-active-directory-connections.md#create-an-active-directory-connection) and retry. |
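As a rough sketch of the reverse-lookup mechanics behind the PTR resolution above (the `10.1.2.4` address below is a placeholder, not a value from this article): a PTR record for an IPv4 address lives under a name formed by reversing the four octets and appending `in-addr.arpa`.

```shell
# Hypothetical illustration: derive the in-addr.arpa name under which the
# PTR record for an AD host's IPv4 address would be created. Placeholder IP.
ip="10.1.2.4"
ptr_name="$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')"
echo "$ptr_name"
```

On the DNS server, the reverse lookup zone for `10.x.x.x` addresses is rooted under `10.in-addr.arpa`, and the PTR record at this name points back to the AD host's fully qualified name.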
| Error conditions | Resolutions |
|-|-|
| Error when creating an SMB volume with ldapEnabled as true: <br> `Error Message: ldapEnabled option is only supported with NFS protocol volume. ` | You cannot create an SMB volume with LDAP enabled. <br> Create SMB volumes with LDAP disabled. |
-| Error when updating the ldapEnabled parameter value for an existing volume: <br> `Error Message: ldapEnabled parameter is not allowed to update` | You cannot modify the LDAP option setting after creating a volume. <br> Do not update the LDAP option setting on a created volume. See [Configure ADDS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for details. |
+| Error when updating the ldapEnabled parameter value for an existing volume: <br> `Error Message: ldapEnabled parameter is not allowed to update` | You cannot modify the LDAP option setting after creating a volume. <br> Do not update the LDAP option setting on a created volume. See [Configure AD DS LDAP with extended groups for NFS volume access](configure-ldap-extended-groups.md) for details. |
| Error when creating an LDAP-enabled NFS volume: <br> `Could not query DNS server` <br> `Sample error message:` <br> `"log": time="2020-10-21 05:04:04.300" level=info msg=Res method=GET url=/v2/Volumes/070d0d72-d82c-c893-8ce3-17894e56cea3 x-correlation-id=9bb9e9fe-abb6-4eb5-a1e4-9e5fbb838813 x-request-id=c8032cb4-2453-05a9-6d61-31ca4a922d85 xresp="200: {\"created\":\"2020-10-21T05:02:55.000Z\",\"lifeCycleState\":\"error\",\"lifeCycleStateDetails\":\"Error when creating - Could not query DNS server. Verify that the network configuration is correct and that DNS servers are available.\",\"name\":\"smb1\",\"ownerId\ \":\"8c925a51-b913-11e9-b0de-9af5941b8ed0\",\"region\":\"westus2stage\",\"volumeId\":\"070d0d72-d82c-c893-8ce3-` | This error occurs because DNS is unreachable. <br> <ul><li> Check if you have configured the correct site (site scoping) for Azure NetApp Files. </li><li> The reason that DNS is unreachable might be an incorrect DNS IP address or networking issues. Check the DNS IP address entered in the AD connection to make sure that it is correct. </li><li> Make sure that the AD and the volume are in the same region and the same VNet. If they are in different VNets, ensure that VNet peering is established between the two VNets.</li></ul> |
| Error when creating volume from a snapshot: <br> `Aggregate does not exist` | Azure NetApp Files doesn't support provisioning a new, LDAP-enabled volume from a snapshot that belongs to an LDAP-disabled volume. <br> Try creating a new LDAP-disabled volume from the given snapshot. |
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Previously updated : 03/02/2022 Last updated : 03/15/2022
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
+## March 2022
+
+* [Encrypted SMB connection to domain controller](create-active-directory-connections.md#encrypted-smb-connection)
+
+ You can now enable SMB encryption for communication between the Azure NetApp Files service and the Active Directory Domain Services domain controller (DC). When you enable this functionality, SMB3 protocol will be used for encrypted DC connections.
+
+* Features that are now generally available (GA)
+
+ The following features are now GA. You no longer need to register the features before using them.
+ * [Backup policy users](create-active-directory-connections.md#backup-policy-users)
+ * [AES encryption for AD authentication](create-active-directory-connections.md#aes-encryption)
+ ## January 2022 * [Azure Application Consistent Snapshot Tool (AzAcSnap) v5.1 Public Preview](azacsnap-release-notes.md)
You might be using the Unix security style with a dual-protocol volume or Lightweight Directory Access Protocol (LDAP) with extended groups features in combination with large LDAP topologies. In this case, you might encounter "access denied" errors on Linux clients when interacting with such Azure NetApp Files volumes. You can now use the **LDAP Search Scope** option to specify the LDAP search scope to avoid "access denied" errors.
-* [Active Directory Domain Services (ADDS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) now generally available (GA)
+* [Active Directory Domain Services (AD DS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) now generally available (GA)
- The ADDS LDAP user-mapping with NFS extended groups feature is now generally available. You no longer need to register the feature before using it.
+ The AD DS LDAP user-mapping with NFS extended groups feature is now generally available. You no longer need to register the feature before using it.
## December 2021
The following features are now GA. You no longer need to register the features before using them. * [Dual-protocol (NFSv4.1 and SMB) volume](create-volumes-dual-protocol.md)
- * [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
+ * [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md)
* [SMB3 Protocol Encryption](azure-netapp-files-create-volumes-smb.md#smb3-encryption) ## November 2021
Azure NetApp Files now supports billing tags to help you cross-reference cost with business units or other internal consumers. Billing tags are assigned at the capacity pool level and not volume level, and they appear on the customer invoice.
-* [Enable Active Directory Domain Services (ADDS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) (Preview)
+* [Enable Active Directory Domain Services (AD DS) LDAP authentication for NFS volumes](configure-ldap-over-tls.md) (Preview)
- By default, LDAP communications between client and server applications are not encrypted. This means that it is possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared VNets when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports the secure communication between an Active Directory Domain Server (ADDS) using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS for setting up authenticated sessions between the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
+ By default, LDAP communications between client and server applications are not encrypted. This means that it is possible to use a network-monitoring device or software to view the communications between an LDAP client and server computers. This scenario might be problematic in non-isolated or shared VNets when an LDAP simple bind is used, because the credentials (username and password) used to bind the LDAP client to the LDAP server are passed over the network unencrypted. LDAP over TLS (also known as LDAPS) is a protocol that uses TLS to secure communication between LDAP clients and LDAP servers. Azure NetApp Files now supports secure communication with an Active Directory Domain Services (AD DS) server using LDAP over TLS. Azure NetApp Files can now use LDAP over TLS for setting up authenticated sessions between the Active Directory-integrated LDAP servers. You can enable the LDAP over TLS feature for NFS, SMB, and dual-protocol volumes. By default, LDAP over TLS is disabled on Azure NetApp Files.
* Support for throughput [metrics](azure-netapp-files-metrics.md)
You can now enable SMB3 Protocol Encryption on Azure NetApp Files SMB and dual-protocol volumes. This feature enables encryption for in-flight SMB3 data, using the [AES-CCM algorithm on SMB 3.0, and the AES-GCM algorithm on SMB 3.1.1](/windows-server/storage/file-server/file-server-smb-overview#features-added-in-smb-311-with-windows-server-2016-and-windows-10-version-1607) connections. SMB clients not using SMB3 encryption will not be able to access this volume. Data at rest is encrypted regardless of this setting. SMB encryption further enhances security. However, it might impact the client (CPU overhead for encrypting and decrypting messages). It might also impact storage resource utilization (reductions in throughput). You should test the encryption performance impact against your applications before deploying workloads into production.
-* [Active Directory Domain Services (ADDS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) (Preview)
+* [Active Directory Domain Services (AD DS) LDAP user-mapping with NFS extended groups](configure-ldap-extended-groups.md) (Preview)
- By default, Azure NetApp Files supports up to 16 group IDs when handling NFS user credentials, as defined in [RFC 5531](https://tools.ietf.org/html/rfc5531). With this new capability, you can now increase the maximum up to 1,024 if you have users who are members of more than the default number of groups. To support this capability, NFS volumes can now also be added to ADDS LDAP, which enables Active Directory LDAP users with extended groups entries (with up to 1024 groups) to access the volume.
+ By default, Azure NetApp Files supports up to 16 group IDs when handling NFS user credentials, as defined in [RFC 5531](https://tools.ietf.org/html/rfc5531). With this new capability, you can now increase the maximum up to 1,024 if you have users who are members of more than the default number of groups. To support this capability, NFS volumes can now also be added to AD DS LDAP, which enables Active Directory LDAP users with extended groups entries (with up to 1024 groups) to access the volume.
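The 16-group default comes from the AUTH_SYS credential in RFC 5531, which carries at most 16 auxiliary group IDs per NFS request. As a hedged sketch (not part of the Azure NetApp Files tooling), you can check on a Linux client whether an account would exceed that limit:

```shell
# Count the current user's group memberships. AUTH_SYS (RFC 5531) carries at
# most 16 auxiliary GIDs per request, so accounts beyond that need the
# LDAP extended-groups option described above.
group_count="$(id -G | wc -w)"
echo "Member of ${group_count} groups"
if [ "${group_count}" -gt 16 ]; then
  echo "AUTH_SYS would truncate this group list; use LDAP extended groups."
fi
```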
## March 2021
azure-sql Winauth Azuread Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql/managed-instance/winauth-azuread-overview.md
Last updated 03/01/2022
## Key capabilities and scenarios
-As customers modernize their infrastructure, application, and data tiers, they also modernize their identity management capabilities by shifting to Azure AD. Azure SQL offers multiple [Azure AD Authentication](/azure/azure-sql/database/authentication-aad-overview.md) options:
+As customers modernize their infrastructure, application, and data tiers, they also modernize their identity management capabilities by shifting to Azure AD. Azure SQL offers multiple [Azure AD Authentication](../database/authentication-aad-overview.md) options:
- 'Azure Active Directory - Password' offers authentication with Azure AD credentials
- 'Azure Active Directory - Universal with MFA' adds multi-factor authentication
bastion Bastion Vm Full Screen https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-vm-full-screen.md
- Title: 'Azure Bastion: View virtual machine session: full screen'
+ Title: 'View virtual machine session full screen in browser'
+ description: Learn how to change the virtual machine view to full screen and back in your browser for an RDP or SSH connection in Azure Bastion.
-# Change to full screen view for a VM session: Azure Bastion
+# Change to full screen view for a VM session
-This article helps you change the virtual machine view to full screen and back in your browser. Before you work with a VM, make sure you have followed the steps to [Create a Bastion host](./tutorial-create-host-portal.md). Then, connect to the VM that you want to work with using either [RDP](bastion-connect-vm-rdp-windows.md) or [SSH](bastion-connect-vm-ssh-linux.md).
+This article helps you change the virtual machine view to full screen and back in your browser when connected to a VM using Azure Bastion.
## Launch the clipboard tool
bastion Connect Native Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/connect-native-client-windows.md
# Connect to a VM using a native client (Preview)
-This article helps you configure your Bastion deployment, and then connect to a VM in the VNet using a native client (SSH or RDP) on your local computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). Additionally with this feature, you can now also upload or download files, depending on the connection type and client.
+This article helps you configure your Bastion deployment, and then connect to a VM in the VNet using the native client (SSH or RDP) on your local computer. The native client feature lets you connect to your target VMs via Bastion using Azure CLI, and expands your sign-in options to include local SSH key pair and Azure Active Directory (Azure AD). Additionally with this feature, you can now also upload or download files, depending on the connection type and client.
Your capabilities on the VM when connecting via a native client are dependent on what is enabled on the native client. Controlling access to features such as file transfer via Bastion isn't supported.
bastion Vm About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-about.md
+
+ Title: 'About VM connections and features'
+
+description: Learn about VM connections and features when connecting using Azure Bastion.
+ Last updated : 03/16/2022
+# About VM connections and features
+
+The sections in this article show you various features and settings that are available when you connect to a VM using Azure Bastion.
+
+## <a name="connect"></a>Connect to a VM
+
+You can use a variety of different methods to connect to a target VM. Some connection types require Bastion to be configured with the Standard SKU. Use the following articles to connect.
++
+## <a name="copy-paste"></a>Copy and paste
+
+For browsers that support the advanced Clipboard API access, you can copy and paste text between your local device and the remote session in the same way you copy and paste between applications on your local device. For other browsers, you can use the Bastion clipboard access tool palette. Only text copy/paste is supported.
+
+For steps and more information, see [Copy and paste - Windows VMs](bastion-vm-copy-paste.md).
+
+## <a name="full-screen"></a>Full screen view
+
+You can change to full screen view and back using your browser. For steps and more information, see [Change to full screen view](bastion-vm-full-screen.md).
+
+## <a name="upload-download"></a>Upload or download files
+
+Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or native SSH client. It may also be possible to use certain third-party clients and tools to upload and download files.
+
+For steps and more information, see [Upload or download files to a VM using a native client](vm-upload-download-native.md).
+
+## <a name="audio"></a>Remote audio
+
+You can enable remote audio output for your VM. Some VMs automatically enable this setting, others require you to enable audio settings manually. The settings are changed on the VM itself. Your Bastion deployment doesn't need any special configuration settings to enable remote audio output.
+
+For steps, see the [Deploy Bastion](tutorial-create-host-portal.md#audio) tutorial.
+
+## Next steps
+
+For frequently asked questions, see the VM section of the [Azure Bastion FAQ](bastion-faq.md).
bastion Vm Upload Download Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/vm-upload-download-native.md
Title: 'Upload or download files - native client'
+ Title: 'Upload or download files using a native client connection'
-description: Learn how to upload or download files using Azure Bastion and a native client.
+description: Learn how to upload or download files using Azure Bastion and a native client when connected to a VM using Azure Bastion.
-# Upload or download files using the native client (Preview)
+# File upload and download to a VM using a native client (Preview)
Azure Bastion offers support for file transfer between your target VM and local computer using Bastion and a native RDP or native SSH client. To learn more about native client support, refer to [Connect to a VM using the native client](connect-native-client-windows.md). While it may be possible to use third-party clients and tools to upload or download files, this article focuses on working with supported native clients.
cognitive-services Luis Get Started Get Intent From Browser https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/LUIS/luis-get-started-get-intent-from-browser.md
In order to query a public app, you need:
#### [V3 prediction endpoint](#tab/V3-3-1)
- Add `show-all-intents=true` to the end of the querystring to **show all intents**, and `verbose=true' to return all detailed information for entities.
+ Add `show-all-intents=true` to the end of the query string to **show all intents**, and `verbose=true` to return all detailed information for entities.
` https://YOUR-LUIS-ENDPOINT-SUBDOMAIN.api.cognitive.microsoft.com/luis/prediction/v3.0/apps/df67dcdb-c37d-46af-88e1-8b97951ca1c2/slots/production/predict?query=turn on all lights&subscription-key=YOUR-LUIS-PREDICTION-KEY&show-all-intents=true&verbose=true
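The query string above can be assembled programmatically. A minimal sketch, assuming placeholder values for the subdomain and prediction key (the `build_prediction_url` helper is illustrative, not part of the LUIS SDK):

```python
from urllib.parse import urlencode

def build_prediction_url(subdomain, app_id, query, prediction_key,
                         show_all_intents=True, verbose=True):
    # Builds the V3 public prediction endpoint URL shown above.
    # `subdomain` and `prediction_key` are placeholders you supply.
    base = (f"https://{subdomain}.api.cognitive.microsoft.com"
            f"/luis/prediction/v3.0/apps/{app_id}/slots/production/predict")
    params = {
        "query": query,
        "subscription-key": prediction_key,
        "show-all-intents": str(show_all_intents).lower(),
        "verbose": str(verbose).lower(),
    }
    return f"{base}?{urlencode(params)}"

url = build_prediction_url("YOUR-LUIS-ENDPOINT-SUBDOMAIN",
                           "df67dcdb-c37d-46af-88e1-8b97951ca1c2",
                           "turn on all lights",
                           "YOUR-LUIS-PREDICTION-KEY")
```

Note that `urlencode` percent-encodes the query text, whereas the raw example above leaves spaces literal; browsers accept both.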
cognitive-services How To Migrate To Custom Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-custom-neural-voice.md
The custom neural voice lets you build higher-quality voice models while requiring less data. You can develop more realistic, natural, and conversational voices. Your customers and end users will benefit from the latest Text-to-Speech technology, in a responsible way.
-|Custom voice |Custom neural voice |
+|Custom voice |Custom neural voice |
|--|--|
| The standard, or "traditional," method of custom voice breaks down spoken language into phonetic snippets that can be remixed and matched using classical programming or statistical methods. | Custom neural voice synthesizes speech using deep neural networks that have "learned" the way phonetics are combined in natural human speech rather than using classical programming or statistical methods.|
-| Custom voice requires a large volume of voice data to produce a more human-like voice model. With fewer recorded lines, a standard custom voice model will tend to sound more obviously robotic. |The custom neural voice capability enables you to create a unique brand voice in multiple languages and styles by using a small set of recordings.|
+| Custom voice<sup>1</sup> requires a large volume of voice data to produce a more human-like voice model. With fewer recorded lines, a standard custom voice model will tend to sound more obviously robotic. |The custom neural voice capability enables you to create a unique brand voice in multiple languages and styles by using a small set of recordings.|
+
+<sup>1</sup> When creating a custom voice model, the maximum number of data files allowed to be imported per subscription is 10 .zip files for free subscription (F0) users, and 500 for standard subscription (S0) users.
## Action required
Before you can migrate to custom neural voice, your [application](https://aka.ms
3. After the custom neural voice model is created, deploy the voice model to a new endpoint. To create a new custom voice endpoint with your neural voice model, go to **Text-to-Speech > Custom Voice > Deploy model**. Select **Deploy models** and enter a **Name** and **Description** for your custom endpoint. Then select the custom neural voice model you would like to associate with this endpoint and confirm the deployment.
4. Update your code in your apps if you have created a new endpoint with a new model.
-## Custom voice details (retired)
+## Custom voice details (deprecated)
Read the following sections for details on custom voice.
If you've created a custom voice font, use the endpoint that you've created. You
| West US | `https://westus.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
| West US 2 | `https://westus2.voice.speech.microsoft.com/cognitiveservices/v1?deploymentId={deploymentId}` |
## Next steps
> [!div class="nextstepaction"]
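The endpoint pattern in the table above only varies by region and deployment ID. As a minimal illustrative sketch (the helper name and sample values are hypothetical, not part of the service API), it can be filled in like this:

```python
def custom_voice_endpoint(region: str, deployment_id: str) -> str:
    """Fill in the regional custom voice endpoint pattern from the table above."""
    return (f"https://{region}.voice.speech.microsoft.com"
            f"/cognitiveservices/v1?deploymentId={deployment_id}")

# Example with a placeholder deployment ID:
url = custom_voice_endpoint("westus2", "my-deployment-id")
```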
cognitive-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-prebuilt-neural-voice.md
The prebuilt neural voice provides more natural sounding speech output, and thus
1. Review the [price](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) structure and listen to the neural voice [samples](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) at the bottom of that page to determine the right voice for your business needs.
2. To make the change, [follow the sample code](speech-synthesis-markup.md#choose-a-voice-for-text-to-speech) to update the voice name in your speech synthesis request to the supported neural voice names in chosen languages. Use neural voices for your speech synthesis request, on cloud or on-premises. For the on-premises container, use the [neural voice containers](../containers/container-image-tags.md) and follow the [instructions](speech-container-howto.md).
-## Standard voice details (retired)
+## Standard voice details (deprecated)
Read the following sections for details on standard voice.
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
zone_pivot_groups: programming-languages-speech-services-nomore-variant
-# Language identification
+# Language identification (preview)
-Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md#language-identification).
+Language identification is used to identify languages spoken in audio when compared against a list of [supported languages](language-support.md#language-identification).
Language identification (LID) use cases include:

* [Standalone language identification](#standalone-language-identification) when you only need to identify the language in an audio source.
-* [Speech-to-text recognition](#speech-to-text) when you need to identify the language in an audio source and then transcribe it to text.
-* [Speech translation](#speech-translation) when you need to identify the language in an audio source and then translate it to another language.
+* [Speech-to-text recognition](#speech-to-text) when you need to identify the language in an audio source and then transcribe it to text.
+* [Speech translation](#speech-translation) when you need to identify the language in an audio source and then translate it to another language.
-Note that for speech recognition, the initial latency is higher with language identification. You should only include this optional feature as needed.
+Note that for speech recognition, the initial latency is higher with language identification. You should only include this optional feature as needed.
## Configuration options
-Whether you use language identification [on its own](#standalone-language-identification), with [speech-to-text](#speech-to-text), or with [speech translation](#speech-translation), there are some common concepts and configuration options.
+Whether you use language identification [on its own](#standalone-language-identification), with [speech-to-text](#speech-to-text), or with [speech translation](#speech-translation), there are some common concepts and configuration options.
- Define a list of [candidate languages](#candidate-languages) that you expect in the audio.
- Decide whether to use [at-start or continuous](#at-start-and-continuous-language-identification) language identification.
- Prioritize [low latency or high accuracy](#accuracy-and-latency-prioritization) of results.
-Then you make a [recognize once or continuous recognition](#recognize-once-or-continuous) request to the Speech service.
+Then you make a [recognize once or continuous recognition](#recognize-once-or-continuous) request to the Speech service.
Code snippets are included with the concepts described next. Complete samples for each use case are provided further below.

### Candidate languages
-You provide candidate languages, at least one of which is expected be in the audio. You can include up to 4 languages for [at-start LID](#at-start-and-continuous-language-identification) or up to 10 languages for [continuous LID](#at-start-and-continuous-language-identification).
+You provide candidate languages, at least one of which is expected to be in the audio. You can include up to 4 languages for [at-start LID](#at-start-and-continuous-language-identification) or up to 10 languages for [continuous LID](#at-start-and-continuous-language-identification).
-You must provide the full 4-letter locale, but language identification only uses one locale per base language. Do not include multiple locales (e.g., "en-US" and "en-GB") for the same language.
+You must provide the full 4-letter locale, but language identification only uses one locale per base language. Do not include multiple locales (e.g., "en-US" and "en-GB") for the same language.
::: zone pivot="programming-language-csharp"

```csharp
var autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
```

::: zone-end
::: zone pivot="programming-language-cpp"

```cpp
auto autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig::FromLanguages({ "en-US", "de-DE", "zh-CN" });
```

::: zone-end
::: zone pivot="programming-language-python"

```python
auto_detect_source_language_config = \
    speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
```

::: zone-end
::: zone pivot="programming-language-java"

```java
AutoDetectSourceLanguageConfig autoDetectSourceLanguageConfig =
    AutoDetectSourceLanguageConfig.fromLanguages(Arrays.asList("en-US", "de-DE", "zh-CN"));
```

::: zone-end
::: zone pivot="programming-language-javascript"

```javascript
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fromLanguages(["en-US", "de-DE", "zh-CN"]);
```

::: zone-end
::: zone pivot="programming-language-objectivec"

```objective-c
NSArray *languages = @[@"en-US", @"de-DE", @"zh-CN"];
SPXAutoDetectSourceLanguageConfiguration* autoDetectSourceLanguageConfig = \
    [[SPXAutoDetectSourceLanguageConfiguration alloc]init:languages];
```

::: zone-end
-For more information, see [supported languages](language-support.md#language-identification).
+For more information, see [supported languages](language-support.md#language-identification).
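The candidate-language rules above (at most 4 locales for at-start LID, at most 10 for continuous LID, and only one locale per base language) can be sketched as a small validation helper. This is an illustrative check, not part of the Speech SDK:

```python
def validate_candidates(locales, continuous=False):
    """Check the candidate-language rules described above (illustrative only).

    - at most 4 locales for at-start LID, 10 for continuous LID
    - one locale per base language (no "en-US" plus "en-GB")
    """
    limit = 10 if continuous else 4
    if not locales or len(locales) > limit:
        return False
    # Reject multiple locales that share a base language, e.g. "en-US" and "en-GB".
    bases = [loc.split("-")[0] for loc in locales]
    return len(bases) == len(set(bases))
```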
### At-start and Continuous language identification
-Speech supports both at-start and continuous language identification (LID).
+Speech supports both at-start and continuous language identification (LID).
> [!NOTE]
> Continuous language identification is only supported with Speech SDKs in C#, C++, and Python.

- At-start LID identifies the language once within the first few seconds of audio. Use at-start LID if the language in the audio won't change.
-- Continuous LID can identify multiple languages for the duration of the audio. Use continuous LID if the language in the audio could change. Continuous LID does not support changing languages within the same sentence. For example, if you are primarily speaking Spanish and insert some English words, it will not detect the language change per word.
+- Continuous LID can identify multiple languages for the duration of the audio. Use continuous LID if the language in the audio could change. Continuous LID does not support changing languages within the same sentence. For example, if you are primarily speaking Spanish and insert some English words, it will not detect the language change per word.
You implement at-start LID or continuous LID by calling methods for [recognize once or continuous](#recognize-once-or-continuous). Results also depend upon your [Accuracy and Latency prioritization](#accuracy-and-latency-prioritization).

### Accuracy and Latency prioritization
-You can choose to prioritize accuracy or latency with language identification.
+You can choose to prioritize accuracy or latency with language identification.
> [!NOTE]
> Latency is prioritized by default with the Speech SDK. You can choose to prioritize accuracy or latency with the Speech SDKs for C#, C++, and Python.
+Prioritize `Latency` if you need a low-latency result such as during live streaming. Set the priority to `Accuracy` if the audio quality may be poor, and more latency is acceptable. For example, a voicemail could have background noise, or some silence at the beginning. Allowing the engine more time will improve language identification results.
-Prioritize `Latency` if you need a low-latency result such as during live streaming. Set the priority to `Accuracy` if the audio quality may be poor, and more latency is acceptable. For example, a voicemail could have background noise, or some silence at the beginning. Allowing the engine more time will improve language identification results.
-
-* **At-start:** With at-start LID in `Latency` mode the result is returned in less than 5 seconds. With at-start LID in `Accuracy` mode the result is returned within 30 seconds. You set the priority for at-start LID with the `SpeechServiceConnection_SingleLanguageIdPriority` property.
-* **Continuous:** With continuous LID in `Latency` mode the results are returned every 2 seconds for the duration of the audio. With continuous LID in `Accuracy` mode the results are returned within no set time frame for the duration of the audio. You set the priority for continuous LID with the `SpeechServiceConnection_ContinuousLanguageIdPriority` property.
+* **At-start:** With at-start LID in `Latency` mode the result is returned in less than 5 seconds. With at-start LID in `Accuracy` mode the result is returned within 30 seconds. You set the priority for at-start LID with the `SpeechServiceConnection_SingleLanguageIdPriority` property.
+* **Continuous:** With continuous LID in `Latency` mode the results are returned every 2 seconds for the duration of the audio. With continuous LID in `Accuracy` mode the results are returned within no set time frame for the duration of the audio. You set the priority for continuous LID with the `SpeechServiceConnection_ContinuousLanguageIdPriority` property.
> [!IMPORTANT]
> With [speech-to-text](#speech-to-text) and [speech translation](#speech-translation) continuous recognition, do not set `Accuracy` with the SpeechServiceConnection_ContinuousLanguageIdPriority property. The setting will be ignored without error, and the default priority of `Latency` will remain in effect. Only [standalone language identification](#standalone-language-identification) supports continuous LID with `Accuracy` prioritization.
-Speech uses at-start LID with `Latency` prioritization by default. You need to set a priority property for any other LID configuration.
+Speech uses at-start LID with `Latency` prioritization by default. You need to set a priority property for any other LID configuration.
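The text names one priority property per LID mode: `SpeechServiceConnection_SingleLanguageIdPriority` for at-start and `SpeechServiceConnection_ContinuousLanguageIdPriority` for continuous. A minimal sketch of that mapping (the helper function is hypothetical; only the property names come from the text):

```python
def lid_priority_property(continuous_lid: bool) -> str:
    """Return the Speech SDK property name for setting LID priority (sketch)."""
    if continuous_lid:
        # Required for continuous LID; without it the service uses at-start LID.
        return "SpeechServiceConnection_ContinuousLanguageIdPriority"
    return "SpeechServiceConnection_SingleLanguageIdPriority"
```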
::: zone pivot="programming-language-csharp"

Here is an example of using continuous LID while still prioritizing latency.

```csharp
speechConfig.SetProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
```

::: zone-end
::: zone pivot="programming-language-cpp"

Here is an example of using continuous LID while still prioritizing latency.

```cpp
speechConfig->SetProperty(PropertyId::SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency");
```

::: zone-end
::: zone pivot="programming-language-python"

Here is an example of using continuous LID while still prioritizing latency.

```python
speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, value='Latency')
```

::: zone-end

When prioritizing `Latency`, the Speech service returns one of the candidate languages provided even if those languages were not in the audio. For example, if `fr-FR` (French) and `en-US` (English) are provided as candidates, but German is spoken, either `fr-FR` or `en-US` would be returned. When prioritizing `Accuracy`, the Speech service will return the string `Unknown` as the detected language if none of the candidate languages are detected or if the language identification confidence is low.
When prioritizing `Latency`, the Speech service returns one of the candidate lan
> [!NOTE]
> You may see cases where an empty string will be returned instead of `Unknown`, due to Speech service inconsistency.
> While this note is present, applications should check for both the `Unknown` and empty string case and treat them identically.

### Recognize once or continuous
-Language identification is completed with recognition objects and operations. You will make a request to the Speech service for recognition of audio.
+Language identification is completed with recognition objects and operations. You will make a request to the Speech service for recognition of audio.
> [!NOTE]
> Don't confuse recognition with identification. Recognition can be used with or without language identification.

Let's map these concepts to the code. You will either call the recognize once method, or the start and stop continuous recognition methods. You choose from:

- Recognize once with at-start LID
- Continuous recognition with at-start LID
-- Continuous recognition with continuous LID
+- Continuous recognition with continuous LID
-The `SpeechServiceConnection_ContinuousLanguageIdPriority` property is always required for continuous LID. Without it the speech service defaults to at-start lid.
+The `SpeechServiceConnection_ContinuousLanguageIdPriority` property is always required for continuous LID. Without it, the Speech service defaults to at-start LID.
::: zone pivot="programming-language-csharp"

```csharp
// Recognize once with At-start LID
var result = await recognizer.RecognizeOnceAsync();
speechConfig.SetProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageId
await recognizer.StartContinuousRecognitionAsync();
await recognizer.StopContinuousRecognitionAsync();
```

::: zone-end
::: zone pivot="programming-language-cpp"

```cpp
// Recognize once with At-start LID
auto result = recognizer->RecognizeOnceAsync().get();
speechConfig->SetProperty(PropertyId::SpeechServiceConnection_ContinuousLanguage
recognizer->StartContinuousRecognitionAsync().get();
recognizer->StopContinuousRecognitionAsync().get();
```

::: zone-end
::: zone pivot="programming-language-python"

```python
# Recognize once with At-start LID
result = recognizer.recognize_once()
speech_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnect
source_language_recognizer.start_continuous_recognition()
source_language_recognizer.stop_continuous_recognition()
```

## Standalone language identification
-You use standalone language identification when you only need to identify the language in an audio source.
+You use standalone language identification when you only need to identify the language in an audio source.
> [!NOTE]
> Standalone source language identification is only supported with the Speech SDKs for C#, C++, and Python.

::: zone pivot="programming-language-csharp"

### [Recognize once](#tab/once)

:::code language="csharp" source="~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/standalone_language_detection_samples.cs" id="languageDetectionInAccuracyWithFile":::

### [Continuous recognition](#tab/continuous)

:::code language="csharp" source="~/samples-cognitive-services-speech-sdk/samples/csharp/sharedcontent/console/standalone_language_detection_samples.cs" id="languageDetectionContinuousWithFile":::
See more examples of standalone language identification on [GitHub](https://gith
See more examples of standalone language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/standalone_language_detection_samples.cpp).

::: zone-end
::: zone pivot="programming-language-python"

### [Recognize once](#tab/once)

:::code language="python" source="~/samples-cognitive-services-speech-sdk/samples/python/console/speech_language_detection_sample.py" id="SpeechLanguageDetectionWithFile":::

### [Continuous recognition](#tab/continuous)

:::code language="python" source="~/samples-cognitive-services-speech-sdk/samples/python/console/speech_language_detection_sample.py" id="SpeechContinuousLanguageDetectionWithFile":::

See more examples of standalone language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_language_detection_sample.py).

::: zone-end

## Speech-to-text

You use Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see [Speech-to-text overview](speech-to-text.md).

> [!NOTE]
> Speech-to-text recognition with at-start language identification is supported with Speech SDKs in C#, C++, Python, Java, JavaScript, and Objective-C. Speech-to-text recognition with continuous language identification is only supported with Speech SDKs in C#, C++, and Python.
->
> Currently for speech-to-text recognition with continuous language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in the code examples. In a future SDK release you won't need to set it.

::: zone pivot="programming-language-csharp"

### [Recognize once](#tab/once)
See more examples of speech-to-text recognition with language identification on
::: zone pivot="programming-language-cpp" - ### [Recognize once](#tab/once) ```cpp
auto autoDetectSourceLanguageResult =
auto detectedLanguage = autoDetectSourceLanguageResult->Language; ``` - ### [Continuous recognition](#tab/continuous) :::code language="cpp" source="~/samples-cognitive-services-speech-sdk/samples/cpp/windows/console/samples/speech_recognition_samples.cpp" id="SpeechContinuousRecognitionAndLanguageIdWithMultiLingualFile"::: See more examples of speech-to-text recognition with language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/speech_recognition_samples.cpp). - ::: zone-end ::: zone pivot="programming-language-java"
See more examples of speech-to-text recognition with language identification on
::: zone pivot="programming-language-python" - ### [Recognize once](#tab/once) ```Python
auto_detect_source_language_result = speechsdk.AutoDetectSourceLanguageResult(re
detected_language = auto_detect_source_language_result.language ``` - ### [Continuous recognition](#tab/continuous) - ```python import azure.cognitiveservices.speech as speechsdk import time
SPXAutoDetectSourceLanguageResult *languageDetectionResult = [[SPXAutoDetectSour
NSString *detectedLanguage = [languageDetectionResult language]; ``` - ::: zone-end ::: zone pivot="programming-language-javascript"
speechRecognizer.recognizeOnceAsync((result: SpeechSDK.SpeechRecognitionResult)
::: zone-end

### Using Speech-to-text custom models

::: zone pivot="programming-language-csharp"
var autoDetectSourceLanguageConfig = SpeechSDK.AutoDetectSourceLanguageConfig.fr
::: zone-end

## Speech translation

You use Speech translation when you need to identify the language in an audio source and then translate it to another language. For more information, see [Speech translation overview](speech-translation.md).

> [!NOTE]
> Speech translation with language identification is only supported with Speech SDKs in C#, C++, and Python.
->
> Currently for speech translation with language identification, you must create a SpeechConfig from the `wss://{region}.stt.speech.microsoft.com/speech/universal/v2` endpoint string, as shown in the code examples. In a future SDK release you won't need to set it.

::: zone pivot="programming-language-csharp"

### [Recognize once](#tab/once)
See more examples of speech translation with language identification on [GitHub]
::: zone pivot="programming-language-python" - ### [Recognize once](#tab/once) ```python
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
|--|--|--|--|
| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.0.0 | Generally available |
| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.0.0 | Generally available |
-| Text-to-speech | Converts text to natural-sounding speech with plain text input or Speech Synthesis Markup Language (SSML). | 1.15.0 | Generally available |
| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.0.0 | Generally available |
The following table describes the minimum and recommended allocation of resource
|--|--|--|
| Speech-to-text | 2 core, 2-GB memory | 4 core, 4-GB memory |
| Custom speech-to-text | 2 core, 2-GB memory | 4 core, 4-GB memory |
-| Text-to-speech | 1 core, 2-GB memory | 2 core, 3-GB memory |
| Speech language identification | 1 core, 1-GB memory | 1 core, 1-GB memory |
| Neural text-to-speech | 6 core, 12-GB memory | 8 core, 16-GB memory |
Container images for Speech are available in the following container registry.
|--|--|
| Custom speech-to-text | `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-speech-to-text:latest` |
-# [Text-to-speech](#tab/tts)
-
-| Container | Repository |
-|--||
-| Text-to-speech | `mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech:latest` |
# [Neural text-to-speech](#tab/ntts)

| Container | Repository |
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/custom-spe
> [!NOTE] > The `locale` and `voice` for custom Speech containers is determined by the custom model ingested by the container.
-# [Text-to-speech](#tab/tts)
-
-#### Docker pull for the text-to-speech container
-
-Use the [docker pull](https://docs.docker.com/engine/reference/commandline/pull/) command to download a container image from Microsoft Container Registry:
-
-```Docker
-docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech:latest
-```
-
-> [!IMPORTANT]
-> The `latest` tag pulls the `en-US` locale and `ariarus` voice. For more locales, see [Text-to-speech locales](#text-to-speech-locales).
-
-#### Text-to-speech locales
-
-All tags, except for `latest`, are in the following format and are case sensitive:
-
-```
-<major>.<minor>.<patch>-<platform>-<locale>-<voice>-<prerelease>
-```
-
-The following tag is an example of the format:
-
-```
-1.8.0-amd64-en-us-ariarus
-```
-
-For all the supported locales and corresponding voices of the text-to-speech container, see [Text-to-speech image tags](../containers/container-image-tags.md#text-to-speech).
-
-> [!IMPORTANT]
-> When you construct a text-to-speech HTTP POST, the [SSML](speech-synthesis-markup.md) message requires a `voice` element with a `name` attribute. The value is the corresponding container locale and voice, which is also known as the [short name](how-to-migrate-to-prebuilt-neural-voice.md). For example, the `latest` tag would have a voice name of `en-US-AriaRUS`.
# [Neural text-to-speech](#tab/ntts)

#### Docker pull for the neural text-to-speech container
Checking available base model for en-us
Starting in v2.5.0 of the custom-speech-to-text container, you can get custom pronunciation results in the output. All you need to do is have your own custom pronunciation rules set up in your custom model and mount the model to a custom-speech-to-text container.
-# [Text-to-speech](#tab/tts)
-
-To run the standard text-to-speech container, execute the following `docker run` command:
-
-```bash
-docker run --rm -it -p 5000:5000 --memory 2g --cpus 1 \
-mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech \
-Eula=accept \
-Billing={ENDPOINT_URI} \
-ApiKey={API_KEY}
-```
-
-This command:
-
-* Runs a standard text-to-speech container from the container image.
-* Allocates 1 CPU core and 2 GB of memory.
-* Exposes TCP port 5000 and allocates a pseudo-TTY for the container.
-* Automatically removes the container after it exits. The container image is still available on the host computer.
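The options above map directly onto a command string. As a small illustration (not an official tool), one could assemble it like this; `{ENDPOINT_URI}` and `{API_KEY}` remain placeholders you must supply from your own Speech resource.

```python
# Illustrative only: assemble the `docker run` command shown above from
# its parts. The defaults mirror the documented minimums (2 GB memory,
# 1 CPU core, TCP port 5000).
def tts_run_command(endpoint_uri: str, api_key: str,
                    memory: str = "2g", cpus: int = 1, port: int = 5000) -> str:
    image = "mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech"
    return (
        f"docker run --rm -it -p {port}:{port} --memory {memory} --cpus {cpus} "
        f"{image} Eula=accept Billing={endpoint_uri} ApiKey={api_key}"
    )

print(tts_run_command("{ENDPOINT_URI}", "{API_KEY}"))
```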
- # [Neural text-to-speech](#tab/ntts)

To run the neural text-to-speech container, execute the following `docker run` command:
Increasing the number of concurrent calls can affect reliability and latency. Fo
| Containers | SDK Host URL | Protocol |
|--|--|--|
| Standard speech-to-text and custom speech-to-text | `ws://localhost:5000` | WS |
-| Text-to-speech (including standard and neural), Speech language identification | `http://localhost:5000` | HTTP |
+| Neural text-to-speech, Speech language identification | `http://localhost:5000` | HTTP |
For more information on using WSS and HTTPS protocols, see [Container security](../cognitive-services-container-support.md#azure-cognitive-services-container-security).
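As an illustration of the table above, a small helper could pick the scheme per container type. The dictionary keys here are informal labels for the containers, not official identifiers.

```python
# Sketch: derive the SDK host URL for a locally running speech container,
# per the protocol table above.
CONTAINER_PROTOCOLS = {
    "speech-to-text": "ws",             # WebSocket
    "custom-speech-to-text": "ws",
    "neural-text-to-speech": "http",
    "language-identification": "http",
}

def sdk_host_url(container: str, host: str = "localhost", port: int = 5000) -> str:
    scheme = CONTAINER_PROTOCOLS[container]
    return f"{scheme}://{host}:{port}"

print(sdk_host_url("speech-to-text"))         # ws://localhost:5000
print(sdk_host_url("neural-text-to-speech"))  # http://localhost:5000
```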
speech_config.set_service_property(
) ```
-### Text-to-speech (standard and neural)
+### Neural text-to-speech
[!INCLUDE [Query Text-to-speech container endpoint](includes/text-to-speech-container-query-endpoint.md)]
In this article, you learned concepts and workflow for how to download, install,
* Speech provides four Linux containers for Docker that have various capabilities:
  * Speech-to-text
  * Custom speech-to-text
- * Text-to-speech
- * Custom text-to-speech
  * Neural text-to-speech
  * Speech language identification
* Container images are downloaded from the container registry in Azure.
cognitive-services Translator How To Install Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/containers/translator-how-to-install-container.md
Application for Gated Services**](https://aka.ms/csgate-translator) to request a
Use the [docker run](https://docs.docker.com/engine/reference/commandline/run/) command to download a container image from Microsoft Container Registry and run it.

```Docker
-docker run --rm -it -p 5000:80 --memory 12g --cpus 4 \
+docker run --rm -it -p 5000:5000 --memory 12g --cpus 4 \
-v /mnt/d/TranslatorContainer:/usr/local/models \
-e apikey={API_KEY} \
-e eula=accept \
-e billing={ENDPOINT_URI} \
-e Languages=en,fr,es,ar,ru \
-mcr.microsoft.com/azure-cognitive-services/translator/text-translation
+mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.018950002-amd64-preview
```

The above command:
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
Regular monthly upgrade
Release note for `2.15.0-amd64`:

**Fixes**
* Fix container start issue that may occur when customers run it in some RHEL environments.
* Fix model download nil error issue in some cases when customers download customized models.

Release note for `2.14.0-amd64`:

Regular monthly release
Release note for `2.5.0-amd64`:
-## Custom Text-to-speech
-
-The [Custom Text-to-speech][sp-ctts] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `custom-text-to-speech`. The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/custom-text-to-speech`. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/custom-text-to-speech/tags/list).
--
-# [Latest version](#tab/current)
-
-Release note for `1.15.0-amd64`:
-
-Regular monthly release
-
-| Image Tags | Notes | Digest |
-|--|:--|:--|
-| `latest` | | `sha256:06eef68482a917a5c405b61146dc159cff6aef0bd8e13cfd8f669a79c6b1a071` |
-| `1.15.0-amd64` | | `sha256:06eef68482a917a5c405b61146dc159cff6aef0bd8e13cfd8f669a79c6b1a071` |
--
-# [Previous version](#tab/previous)
-
-Release note for `1.14.1-amd64`:
-
-Regular monthly release
-
-Release note for `1.13.0-amd64`:
-
-**Fixes**
-* Keep user's inputs case-sensitive.
-
-Release note for `1.12.0-amd64`:
-
-Regular monthly release
-
-Release note for `1.11.0-amd64`:
-
-**Features**
-* More error details for issues when fetching custom models by ID.
-
-Release note for `1.9.0-amd64`:
-
-Regular monthly release
-
-Release note for `1.8.0-amd64`:
-
-**Features**
-* Fully migrated to .NET 3.1
-
-Release note for `1.7.0-amd64`:
-
-**Features**
-* Partially migrated to .NET 3.1
-
-| Image Tags | Notes |
-|-|:--|
-| `1.13.0-amd64` | |
-| `1.12.0-amd64` | |
-| `1.11.0-amd64` | |
-| `1.9.0-amd64` | |
-| `1.8.0-amd64` | |
-| `1.7.0-amd64` | 1st GA version |
---

## Speech-to-text

The [Speech-to-text][sp-stt] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `speech-to-text`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text`. You can find a full list of [tags on the MCR](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/speech-to-text/tags/list).
Regular monthly upgrade
Release note for `2.15.0-amd64-<locale>`:

**Fixes**
-* Fix container start issue that may occur when customer run it in some RHEL environments.
+* Fix container start issue that may occur when a customer runs it in some RHEL environments.
Release note for `2.14.0-amd64-<locale>`:
This container has the following locales available.
-## Text-to-speech
-
-The [Text-to-speech][sp-tts] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `text-to-speech`. The fully qualified container image name is, `mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech`.
-
-This container image has the following tags available. You can also find a full list of [tags on the MCR](https://mcr.microsoft.com/v2/azure-cognitive-services/speechservices/text-to-speech/tags/list).
--
-# [Latest version](#tab/current)
-
-Release note for `1.15.0-amd64-<locale-and-voice>`:
-
-**Feature**
-* Upgrade to latest models.
-
-| Locales for v1.15.0 | Notes | Digest |
-||:|:-|
-| `ar-eg-hoda` | Container image with the `ar-EG` locale and `ar-EG-Hoda` voice. | `sha256:61a154451bfef9766235f85fc7ca3698151244b04bf32cfc5a47a04b9c08f8e4` |
-| `ar-sa-naayf` | Container image with the `ar-SA` locale and `ar-SA-Naayf` voice. | `sha256:13cf045d959ce9362adfad114d8997e628f5e0d08e6e86a86e733967372e5e2d` |
-| `bg-bg-ivan` | Container image with the `bg-BG` locale and `bg-BG-Ivan` voice. | `sha256:19f8c32f6723470c14c4b1731ff256853ee5c441a95a89faff767c2c4e4447a9` |
-| `ca-es-herenarus` | Container image with the `ca-ES` locale and `ca-ES-HerenaRUS` voice. | `sha256:16835388036906af8b35238f05b7f17308b8fae92bf4c89199dcc0b35bb289d6` |
-| `cs-cz-jakub` | Container image with the `cs-CZ` locale and `cs-CZ-Jakub` voice. | `sha256:06af13ede8234c14f8a48b956017cd7858a1c0d984042a9a60309ae9f8f6a25b` |
-| `da-dk-hellerus` | Container image with the `da-DK` locale and `da-DK-HelleRUS` voice. | `sha256:1c6375ee05948ec9a9b2554e2423e2c2d68e7595f58d401bd2f9fc25bd512bde` |
-| `de-at-michael` | Container image with the `de-AT` locale and `de-AT-Michael` voice. | `sha256:27e88c817ab91b2a4dbb5df1f88828708445993c1d657d974b6253f1820e280f` |
-| `de-ch-karsten` | Container image with the `de-CH` locale and `de-CH-Karsten` voice. | `sha256:6b8ce6192783c1b158410a43a8fd9517cfe63c8b4a3cd0f1118acd891e7ebea5` |
-| `de-de-heddarus` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:28e1a6f0860165a4f3750b059334117240e0613ddf44d1e3c41615093bd3e226` |
-| `de-de-hedda` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:28e1a6f0860165a4f3750b059334117240e0613ddf44d1e3c41615093bd3e226` |
-| `de-de-stefan-apollo` | Container image with the `de-DE` locale and `de-DE-Stefan-Apollo` voice. | `sha256:3730dfefb60f3a74df523e790738595b29e3dc694a16506a6deccffec264aa2a` |
-| `el-gr-stefanos` | Container image with the `el-GR` locale and `el-GR-Stefanos` voice. | `sha256:dfce427d7c08bd26d38513fd4b5c85662fe4feeddefa75e1245c37bb5b245b45` |
-| `en-au-catherine` | Container image with the `en-AU` locale and `en-AU-Catherine` voice. | `sha256:71a9a64adc48044e2ce81119bc118056a906db284311fc3761b3cdfe21c6ad18` |
-| `en-au-hayleyrus` | Container image with the `en-AU` locale and `en-AU-HayleyRUS` voice. | `sha256:a42624ebf51afff052a0ed8518f474855d70b4a9245cd8e81492b449e6b765d1` |
-| `en-ca-heatherrus` | Container image with the `en-CA` locale and `en-CA-HeatherRUS` voice. | `sha256:b5f745bbf9de83f57ac4e6e2760049e10a8eaae362018c4d5a4ace02a50710dc` |
-| `en-ca-linda` | Container image with the `en-CA` locale and `en-CA-Linda` voice. | `sha256:6638a92b495c76ca16331c652b123fa52163242cfbd8f8298c9118a0f1261719` |
-| `en-gb-george-apollo` | Container image with the `en-GB` locale and `en-GB-George-Apollo` voice. | `sha256:748c7042dfa3107f387c34ee29269fc2bd96f27af525f2dc7b50275dae106bd1` |
-| `en-gb-hazelrus` | Container image with the `en-GB` locale and `en-GB-HazelRUS` voice. | `sha256:8a439470579f95645bf5831ee5f0643b872d6bdbd7426cf57264bb5a13c12624` |
-| `en-gb-susan-apollo` | Container image with the `en-GB` locale and `en-GB-Susan-Apollo` voice. | `sha256:ad9c5741a2b19fc936ec740fa0bbd2700e09e361d7ce9df0bb5fb204f6c31ec5` |
-| `en-ie-sean` | Container image with the `en-IE` locale and `en-IE-Sean` voice. | `sha256:45d1d07f67c81b11f7b239f0e46bd229694d0e795b01e72e583b2aecf671af3e` |
-| `en-in-heera-apollo` | Container image with the `en-IN` locale and `en-IN-Heera-Apollo` voice. | `sha256:f4cd71fac26b0d1f0693ce91535a0fd14ac90e323c6f9d8239f3eb7a196ff454` |
-| `en-in-priyarus` | Container image with the `en-IN` locale and `en-IN-PriyaRUS` voice. | `sha256:5a228190a5fe62aaa5f8443ab4041d2a7af381e30236333a44c364e990eeaba4` |
-| `en-in-ravi-apollo` | Container image with the `en-IN` locale and `en-IN-Ravi-Apollo` voice. | `sha256:3f477ad93ff643f90adf268775c9b8cd8fb3b2cadf347b3663317184c4e462c6` |
-| `en-us-aria24krus` | Container image with the `en-US` locale and `en-US-Aria24kRUS` voice. | `sha256:d4ece3a336171cd46068831b3203460c86e5cd7f053b56a8a7017a0547580030` |
-| `en-us-ariarus` | Container image with the `en-US` locale and `en-US-AriaRUS` voice. | `sha256:d4ece3a336171cd46068831b3203460c86e5cd7f053b56a8a7017a0547580030` |
-| `en-us-benjaminrus` | Container image with the `en-US` locale and `en-US-BenjaminRUS` voice. | `sha256:f668eb749ee51c01bcadf0df8e1a0b6fc000fb64a93bd12458bcff4e817bd4cf` |
-| `en-us-guy24krus` | Container image with the `en-US` locale and `en-US-Guy24kRUS` voice. | `sha256:50900ece25a078bc4e0a0fec845cc9516e975a7b90621e4fdea135c16b593752` |
-| `en-us-zirarus` | Container image with the `en-US` locale and `en-US-ZiraRUS` voice. | `sha256:772bdc81780a05f7400a88b1cddcef6ef0be153ce873df23918986f72920aa41` |
-| `es-es-helenarus` | Container image with the `es-ES` locale and `es-ES-HelenaRUS` voice. | `sha256:ab25fc60c8a8e095fcf63fe953bd2acf1f0569e6aafb02e90da916f7ef1905ce` |
-| `es-es-laura-apollo` | Container image with the `es-ES` locale and `es-ES-Laura-Apollo` voice. | `sha256:11c144693d62b28e1444378638295e801c07888fd6ff70903bdbb775a8cd4c7a` |
-| `es-es-pablo-apollo` | Container image with the `es-ES` locale and `es-ES-Pablo-Apollo` voice. | `sha256:56db18adc44ee4412fd64f2d9303960525627ecf9b6cd6c201d5260af5340378` |
-| `es-mx-hildarus` | Container image with the `es-MX` locale and `es-MX-HildaRUS` voice. | `sha256:80ad68c2ca58380ca3d88e509ad32a21f70ecc41fab701f629e2de509162bf61` |
-| `es-mx-raul-apollo` | Container image with the `es-MX` locale and `es-MX-Raul-Apollo` voice. | `sha256:fd51cdcc46ac5c81949d7ff3ceeacf7144fb6e516089fff645b64b9159269488` |
-| `fi-fi-heidirus` | Container image with the `fi-FI` locale and `fi-FI-HeidiRUS` voice. | `sha256:0ba17a99d35d4963110316d6bb7742082d0362f23490790bb8a8142f459ed143` |
-| `fr-ca-caroline` | Container image with the `fr-CA` locale and `fr-CA-Caroline` voice. | `sha256:67304f764165b34c051104d8ef51202dcbaafcf3b88d5568ac41b54ecf820563` |
-| `fr-ca-harmonierus` | Container image with the `fr-CA` locale and `fr-CA-HarmonieRUS` voice. | `sha256:9b428ec672b60e8e6f9642cc5f23741e84df5e68477bb5fd4fdee4222e401d47` |
-| `fr-ch-guillaume` | Container image with the `fr-CH` locale and `fr-CH-Guillaume` voice. | `sha256:d3fedebf0321f9135335be369fec84be42a3653977f0834c6b5fda3fefeab81e` |
-| `fr-fr-hortenserus` | Container image with the `fr-FR` locale and `fr-FR-HortenseRUS` voice. | `sha256:2d33762773d299ffd37a3103b3c32ce8d1b7f3f107daf6514be4006cfbc8fd47` |
-| `fr-fr-julie-apollo` | Container image with the `fr-FR` locale and `fr-FR-Julie-Apollo` voice. | `sha256:54f762a2d68cc8a33049b18085fac44f5bad1750a1d85347d5174550fe2c2798` |
-| `fr-fr-paul-apollo` | Container image with the `fr-FR` locale and `fr-FR-Paul-Apollo` voice. | `sha256:7d3e4a75495be2c503f55596d39a5bdfe75538b453a5fb7edb7d17e0c036f3f0` |
-| `he-il-asaf` | Container image with the `he-IL` locale and `he-IL-Asaf` voice. | `sha256:729bd1c6128ee059e89d04e2e2fd5cd925e59550014b901bf5ac0b7cd44e9fa4` |
-| `hi-in-hemant` | Container image with the `hi-IN` locale and `hi-IN-Hemant` voice. | `sha256:9ed035183c7c2a0debe44dc6bae67d097334b0be8f5bec643b7e320c534b7cb2` |
-| `hi-in-kalpana-apollo` | Container image with the `hi-IN` locale and `hi-IN-Kalpana-Apollo` voice. | `sha256:f043d625788fd61bba7454a64502572a2e4fed310775c371c71db3c0fcf6aa01` |
-| `hi-in-kalpana` | Container image with the `hi-IN` locale and `hi-IN-Kalpana` voice. | `sha256:f043d625788fd61bba7454a64502572a2e4fed310775c371c71db3c0fcf6aa01` |
-| `hr-hr-matej` | Container image with the `hr-HR` locale and `hr-HR-Matej` voice. | `sha256:a320245b93af76b125386f4566383ec6e13a21c951a8468d1f0f87e800b79bb6` |
-| `hu-hu-szabolcs` | Container image with the `hu-HU` locale and `hu-HU-Szabolcs` voice. | `sha256:94d86ae188bb08df0192de4221404132d631cae6aa6d4fc4bfc0ffcce8f68d89` |
-| `id-id-andika` | Container image with the `id-ID` locale and `id-ID-Andika` voice. | `sha256:8fee6f6d8552fae0ce050765ea5c842497a699f5feb700f705c506dab3bac4a6` |
-| `it-it-cosimo-apollo` | Container image with the `it-IT` locale and `it-IT-Cosimo-Apollo` voice. | `sha256:1d99f0f538e0d61b527fbc77f9281e0f932bac7e6ba513b13ecfc734bd95f44d` |
-| `it-it-luciarus` | Container image with the `it-IT` locale and `it-IT-LuciaRUS` voice. | `sha256:99db33a668e298c58be1c50b9d4b84aeb0949f0334187b02167cfa3044997993` |
-| `ja-jp-ayumi-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ayumi-Apollo` voice. | `sha256:50d1e986d318692917968654008466fc3cca4911c3bcd36af67f37e91de18fe2` |
-| `ja-jp-harukarus` | Container image with the `ja-JP` locale and `ja-JP-HarukaRUS` voice. | `sha256:7736a87dcf3595056bb558c6cb38094d1732bb164406a99d87c0ac09c8eee271` |
-| `ja-jp-ichiro-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ichiro-Apollo` voice. | `sha256:6ce704a51150e0ee092f2197ba7cf4bcbf8473e5cd56a9a0839ad81d87b2dfe2` |
-| `ko-kr-heamirus` | Container image with the `ko-KR` locale and `ko-KR-HeamiRUS` voice. | `sha256:ec5d75470dbae50cb5bc2f93ed642e40446b099cb2302499b3a83b3a27358bd0` |
-| `ms-my-rizwan` | Container image with the `ms-MY` locale and `ms-MY-Rizwan` voice. | `sha256:e572b62f0b4153382318266dcd59d6e92daf8acc6f323e461d517d34f9be45dd` |
-| `nb-no-huldarus` | Container image with the `nb-NO` locale and `nb-NO-HuldaRUS` voice. | `sha256:691ef2ead95a0d4703cd6064bac9355e86a361fcffe5ad36a78e9f1e1c78739c` |
-| `nl-nl-hannarus` | Container image with the `nl-NL` locale and `nl-NL-HannaRUS` voice. | `sha256:f52a717d4d8b7db39b18c9a9e448e2e6d6e19600093518002a6fc03f0b2a57c9` |
-| `pl-pl-paulinarus` | Container image with the `pl-PL` locale and `pl-PL-PaulinaRUS` voice. | `sha256:1927ff28b40b7c37ee1b8d5f4efb2fd7d905affd35c27983940c7e5795763c70` |
-| `pt-br-daniel-apollo` | Container image with the `pt-BR` locale and `pt-BR-Daniel-Apollo` voice. | `sha256:ebce3b7b51fb28fce4c446fbbf3607f4307b1cec3f9fa7abdd046839a259e91d` |
-| `pt-br-heloisarus` | Container image with the `pt-BR` locale and `pt-BR-HeloisaRUS` voice. | `sha256:195e719735768fdf6ea2f1fc829a40cae5af4d35b62e52d1c798e680f915dd12` |
-| `pt-pt-heliarus` | Container image with the `pt-PT` locale and `pt-PT-HeliaRUS` voice. | `sha256:f0ea6ec57615a55b13f491e6f96b3cc0e29092f63a981fd29771bcfa2b26c0e1` |
-| `ro-ro-andrei` | Container image with the `ro-RO` locale and `ro-RO-Andrei` voice. | `sha256:deee319f2b6d8145f3ed567cfcdfa2ca718cd1b408f8d9fbf15f90d02d5b6b35` |
-| `ru-ru-ekaterinarus` | Container image with the `ru-RU` locale and `ru-RU-EkaterinaRUS` voice. | `sha256:d0005c1363e197c0f85180a07d650655b473117de12170a631f3049d99f86581` |
-| `ru-ru-irina-apollo` | Container image with the `ru-RU` locale and `ru-RU-Irina-Apollo` voice. | `sha256:53731218ed6e2bed2227c25a2a2e1d528a19dbc078e2af55aa959d191df50487` |
-| `ru-ru-pavel-apollo` | Container image with the `ru-RU` locale and `ru-RU-Pavel-Apollo` voice. | `sha256:81b2a56f72460a780466337136729b011ef1eac4689b1ec9edbbd980b53ba6c3` |
-| `sk-sk-filip` | Container image with the `sk-SK` locale and `sk-SK-Filip` voice. | `sha256:e3d44c7ac30b1b9b186eaf1761ccadd89b17fcb4d4f63e1dab246a80093967f3` |
-| `sl-si-lado` | Container image with the `sl-SI` locale and `sl-SI-Lado` voice. | `sha256:8ecb2b3d0c60f4c88522090d24e55d84a6132b751d71b41a3d1ebbae78fc3b2b` |
-| `sv-se-hedvigrus` | Container image with the `sv-SE` locale and `sv-SE-HedvigRUS` voice. | `sha256:5b61e4ebe696e7cee23403ec4aed299cbf4874c0eeb5a163a82ba0ba752b78a8` |
-| `ta-in-valluvar` | Container image with the `ta-IN` locale and `ta-IN-Valluvar` voice. | `sha256:adf3c421feb6385ba3acb241750d909a42f41d09b5ebbc66dbb50dac84ef5638` |
-| `te-in-chitra` | Container image with the `te-IN` locale and `te-IN-Chitra` voice. | `sha256:e9fc71faf37ca890a82e29bec29b6cfd94299e2d78aaed8c98bc09add2522e2d` |
-| `th-th-pattara` | Container image with the `th-TH` locale and `th-TH-Pattara` voice. | `sha256:b02cc2b23a7d1ec2f3f2d3917a51316fb009597d5d9606b5f129968c35c365f6` |
-| `tr-tr-sedarus` | Container image with the `tr-TR` locale and `tr-TR-SedaRUS` voice. | `sha256:961773f7f544cc0643590f4ed44d40f12e3fa23e44834afd199e261651b702ae` |
-| `vi-vn-an` | Container image with the `vi-VN` locale and `vi-VN-An` voice. | `sha256:f1fdda1c758a4361d2fb594f02d47be7cf88571e5a51fb845b1b00bf0b89d20e` |
-| `zh-cn-huihuirus` | Container image with the `zh-CN` locale and `zh-CN-HuihuiRUS` voice. | `sha256:183125591097ab157bf57088fae3a8ab0af4472cabd3d1c7bdaba51748e73342` |
-| `zh-cn-kangkang-apollo` | Container image with the `zh-CN` locale and `zh-CN-Kangkang-Apollo` voice. | `sha256:72a77502eb91ebf407bfbfb068b442e1c281da33814e042b026973af2d8d42e0` |
-| `zh-cn-yaoyao-apollo` | Container image with the `zh-CN` locale and `zh-CN-Yaoyao-Apollo` voice. | `sha256:9a202b3172def1a35553d7adf5298af71b44dde10ee261752b057b3dcc39ddea` |
-| `zh-hk-danny-apollo` | Container image with the `zh-HK` locale and `zh-HK-Danny-Apollo` voice. | `sha256:9bbba04f272231084b9c87d668e5a71ab7f61d464eeaab50d44a3f2121874524` |
-| `zh-hk-tracy-apollo` | Container image with the `zh-HK` locale and `zh-HK-Tracy-Apollo` voice. | `sha256:048d335ea90493fde6ccce8715925e472fddb405c3208bba5ac751bfdf85b254` |
-| `zh-hk-tracyrus` | Container image with the `zh-HK` locale and `zh-HK-TracyRUS` voice. | `sha256:048d335ea90493fde6ccce8715925e472fddb405c3208bba5ac751bfdf85b254` |
-| `zh-tw-hanhanrus` | Container image with the `zh-TW` locale and `zh-TW-HanHanRUS` voice. | `sha256:fe30bb665c416d0a6cc3547425e1736802d7527eebdd919ee4ed66989ebc368b` |
-| `zh-tw-yating-apollo` | Container image with the `zh-TW` locale and `zh-TW-Yating-Apollo` voice. | `sha256:6308d4e4302d02bbb4043ec6cceb6e574b7e156a5d774bef095be6c34af7194c` |
-| `zh-tw-zhiwei-apollo` | Container image with the `zh-TW` locale and `zh-TW-Zhiwei-Apollo` voice. | `sha256:e40dda8b5e9313a5962c260c1e9eb410b19e60fa74062ad0691455dc8442a4d9` |
--
-# [Previous version](#tab/previous)
-
-Release note for `1.14.1-amd64-<locale-and-voice>`:
-
-**Feature**
-* Upgrade to latest models.
-
-Release note for `1.13.0-amd64-<locale-and-voice>`:
-
-**Feature**
-* Upgrade to latest models.
-
-Release note for `1.12.0-amd64-<locale-and-voice>`:
-
-**Feature**
-* Upgrade to latest models.
-
-Release note for `1.11.0-amd64-<locale-and-voice>`:
-
-**Feature**
-* More error details for issues when fetching custom models by ID.
-
-Release note for `1.9.0-amd64-<locale-and-voice>`:
-
-* Regular monthly release
-
-Release note for `1.8.0-amd64-<locale-and-voice>`:
-
-**Feature**
-
-* Fully migrated to .NET 3.1
-
-Release note for `1.7.0-amd64-<locale-and-voice>`:
-
-**Feature**
-
-* Upgraded components to .NET 3.1
-
-| Image Tags | Notes |
-||:--|
-| `1.13.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locale and voice combinations, listed below. For example `1.13.0-amd64-en-us-ariarus`. |
-| `1.12.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locale and voice combinations, listed below. For example `1.12.0-amd64-en-us-ariarus`. |
-| `1.11.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locale and voice combinations, listed below. For example `1.11.0-amd64-en-us-ariarus`. |
-| `1.9.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locale and voice combinations, listed below. For example `1.9.0-amd64-en-us-ariarus`. |
-| `1.8.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locale and voice combinations, listed below. For example `1.8.0-amd64-en-us-ariarus`. |
-| `1.7.0-amd64-<locale-and-voice>` | 1st GA version. Replace `<locale-and-voice>` with one of the available locale and voice combinations, listed below. For example `1.7.0-amd64-en-us-ariarus`. |
-
-| Locales for v1.13.0 | Notes | Digest |
-||:|:-|
-| `ar-eg-hoda` | Container image with the `ar-EG` locale and `ar-EG-Hoda` voice. | `sha256:8ff6360ba584d81b987582ce1c2cb6bb624cf68e4d71544805b9afc0401542dd` |
-| `ar-sa-naayf` | Container image with the `ar-SA` locale and `ar-SA-Naayf` voice. | `sha256:da5037de95c00362cb1871374735778c3eb68640ae4cb6a260659e7e0a67c37e` |
-| `bg-bg-ivan` | Container image with the `bg-BG` locale and `bg-BG-Ivan` voice. | `sha256:871140e57c126ac79c92c69572b86587150d1f14447c91152de3d4b10b3ef9f6` |
-| `ca-es-herenarus` | Container image with the `ca-ES` locale and `ca-ES-HerenaRUS` voice. | `sha256:7291ca9c579b1967cca941ce11321daa06ed6a9a1f0922d425d39f70a4aa8acd` |
-| `cs-cz-jakub` | Container image with the `cs-CZ` locale and `cs-CZ-Jakub` voice. | `sha256:c8f34c3a7fc5af5141da5439b520614e039d133b6180e8157f12ec7279e9163a` |
-| `da-dk-hellerus` | Container image with the `da-DK` locale and `da-DK-HelleRUS` voice. | `sha256:694eb294595700266355f8d57530ec3cccd4e04aa74dd630b96558bf2b481e71` |
-| `de-at-michael` | Container image with the `de-AT` locale and `de-AT-Michael` voice. | `sha256:f875435d8fadb56df2123d5aa1ceca34990d00f4c75678eb2526b83058972717` |
-| `de-ch-karsten` | Container image with the `de-CH` locale and `de-CH-Karsten` voice. | `sha256:c58359bd6e6676e23dda181a86caee1771366b0329a44fae0f363bbd381058ad` |
-| `de-de-heddarus` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:c8e615d40c6e96216b90e329bf7185060de646db1e92fd1fdcd344a52bd86b55` |
-| `de-de-hedda` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:c8e615d40c6e96216b90e329bf7185060de646db1e92fd1fdcd344a52bd86b55` |
-| `de-de-stefan-apollo` | Container image with the `de-DE` locale and `de-DE-Stefan-Apollo` voice. | `sha256:e8e3f04f0ee74d4247ffb7c69e54559f0cc6db66a121406e06ceb9dcdc3c4379` |
-| `el-gr-stefanos` | Container image with the `el-GR` locale and `el-GR-Stefanos` voice. | `sha256:15112a55bc7ccb6c29ee0a1de464fa6352a0e9953399032e5c8a0d29ec064af0` |
-| `en-au-catherine` | Container image with the `en-AU` locale and `en-AU-Catherine` voice. | `sha256:9a77bb5451889f62b8a146bfcc4a412c1cef95fd2102650528ccee84a08b25b8` |
-| `en-au-hayleyrus` | Container image with the `en-AU` locale and `en-AU-HayleyRUS` voice. | `sha256:90ee1094fbb8e739788545b3b9f4fabad5b4dffb5b7087cfd01c3b21ba1b2473` |
-| `en-ca-heatherrus` | Container image with the `en-CA` locale and `en-CA-HeatherRUS` voice. | `sha256:43b7d3c87162129253fd5c150307a5d9dc6ea28b8fa19776b66f4aa7a546f43b` |
-| `en-ca-linda` | Container image with the `en-CA` locale and `en-CA-Linda` voice. | `sha256:75a4423d5b24136efdc5de28a7a5b50a3a09b65b3824f86dd50a95eefea7ead6` |
-| `en-gb-george-apollo` | Container image with the `en-GB` locale and `en-GB-George-Apollo` voice. | `sha256:87e926f7db4a27870c735c80ad801bc5480fb2665594727ae760c8c287677088` |
-| `en-gb-hazelrus` | Container image with the `en-GB` locale and `en-GB-HazelRUS` voice. | `sha256:3fbd6a824831f158762036aa41c0397f7c1148150a4dc045db5f19ba840e74b6` |
-| `en-gb-susan-apollo` | Container image with the `en-GB` locale and `en-GB-Susan-Apollo` voice. | `sha256:646810c4129f8919ff56d91701b488e229bd12b3dd9c89a1635868f9340e00b8` |
-| `en-ie-sean` | Container image with the `en-IE` locale and `en-IE-Sean` voice. | `sha256:641abfa96380f142d4b2f9145cd02886d44f01bce68614094b48c1e01b50ed59` |
-| `en-in-heera-apollo` | Container image with the `en-IN` locale and `en-IN-Heera-Apollo` voice. | `sha256:c0acfffceae9c1ff5ad305d8b98929d9c65eca25f49ddcb8999d7de6118392d2` |
-| `en-in-priyarus` | Container image with the `en-IN` locale and `en-IN-PriyaRUS` voice. | `sha256:fbdc9ef0b4308ffce87d6ff6854814804b3cafacad6c4dc5cdac6a47c6de7975` |
-| `en-in-ravi-apollo` | Container image with the `en-IN` locale and `en-IN-Ravi-Apollo` voice. | `sha256:f31c40c9db2f1e826686649e748d0b2be0c00abcac62c2aae5b8981b0d8c681d` |
-| `en-us-aria24krus` | Container image with the `en-US` locale and `en-US-Aria24kRUS` voice. | `sha256:1232b798aae3ce68d1e555a5b35142bde5b4c871488f8c82c3d7c0767925afd8` |
-| `en-us-ariarus` | Container image with the `en-US` locale and `en-US-AriaRUS` voice. | `sha256:1232b798aae3ce68d1e555a5b35142bde5b4c871488f8c82c3d7c0767925afd8` |
-| `en-us-benjaminrus` | Container image with the `en-US` locale and `en-US-BenjaminRUS` voice. | `sha256:5fd7e9fbcc84ab467d04e95b18f5411579ce2d9a153b7f6e396f2412d08898dc` |
-| `en-us-guy24krus` | Container image with the `en-US` locale and `en-US-Guy24kRUS` voice. | `sha256:5fbbd16ab58b7f2440778b258bb0cd966286de0dbb3ce7f5e54d0f244f63dd3f` |
-| `en-us-zirarus` | Container image with the `en-US` locale and `en-US-ZiraRUS` voice. | `sha256:806b92916b2fe1e7855023a009742033a48cb7eddde84ddf7c93be93b9621026` |
-| `es-es-helenarus` | Container image with the `es-ES` locale and `es-ES-HelenaRUS` voice. | `sha256:507d9f40dcb846a5d1511a5e9e1cf94b360b1d9922f4b1143c3146d1b3bc69a2` |
-| `es-es-laura-apollo` | Container image with the `es-ES` locale and `es-ES-Laura-Apollo` voice. | `sha256:594add691d03d02fa5925f817e6a25c091fac1a924e0ea4b626e0fce858a78cb` |
-| `es-es-pablo-apollo` | Container image with the `es-ES` locale and `es-ES-Pablo-Apollo` voice. | `sha256:09d288b58fea080689471618227d1cb3ccc467f2edc9477eaaffffb09b3d6d8b` |
-| `es-mx-hildarus` | Container image with the `es-MX` locale and `es-MX-HildaRUS` voice. | `sha256:7019c80c88444a60bf1016eb66284745dc8184b051685df4a1b3c40d32c8ad7f` |
-| `es-mx-raul-apollo` | Container image with the `es-MX` locale and `es-MX-Raul-Apollo` voice. | `sha256:eed46588733b884c330fff1ff7f4e3e3fd6416cb340ebd80e44c4b3d1e085e55` |
-| `fi-fi-heidirus` | Container image with the `fi-FI` locale and `fi-FI-HeidiRUS` voice. | `sha256:00f7a854c4a01bdbef88e0b138c97f732f1c6008a8b2c1722fc8da3a91fa79a4` |
-| `fr-ca-caroline` | Container image with the `fr-CA` locale and `fr-CA-Caroline` voice. | `sha256:5f32e838a0925c560d2961a42487b99dd7e79e04661a7711f905d36c55973fd6` |
-| `fr-ca-harmonierus` | Container image with the `fr-CA` locale and `fr-CA-HarmonieRUS` voice. | `sha256:6f3d3237c990f8f04d4c8f488746f74fa94edd2c5f1def758af90b2be251900e` |
-| `fr-ch-guillaume` | Container image with the `fr-CH` locale and `fr-CH-Guillaume` voice. | `sha256:282e2e48c1147b74d927e801534be52b1301a081ff881994e85bb9d85b6e85fb` |
-| `fr-fr-hortenserus` | Container image with the `fr-FR` locale and `fr-FR-HortenseRUS` voice. | `sha256:16370c22530c93fc6c5ebeaf10663de7c3d45db58eccc716abd5274b5bee56d3` |
-| `fr-fr-julie-apollo` | Container image with the `fr-FR` locale and `fr-FR-Julie-Apollo` voice. | `sha256:e6541e82b8555f748f1feb5eef1c0ebf884245c5448f0ced46e6f25dabb925a2` |
-| `fr-fr-paul-apollo` | Container image with the `fr-FR` locale and `fr-FR-Paul-Apollo` voice. | `sha256:a4cf0bab208a31da3e796bf353969dfd98184b30e0cf713df49cb4fb07ff568b` |
-| `he-il-asaf` | Container image with the `he-IL` locale and `he-IL-Asaf` voice. | `sha256:4417d0a14098b564eb4ba91772eb7ad5976ac52b0b59ae484fc3a88017e0776b` |
-| `hi-in-hemant` | Container image with the `hi-IN` locale and `hi-IN-Hemant` voice. | `sha256:da086a3e2bc3e17f4e44165055fc61679e9356688d3735ee8cfd81e6265b8622` |
-| `hi-in-kalpana-apollo` | Container image with the `hi-IN` locale and `hi-IN-Kalpana-Apollo` voice. | `sha256:0c9915bf34e3045e39aa245c597aa7223fbf6100d7e20cbcc1bf131f89ee785e` |
-| `hi-in-kalpana` | Container image with the `hi-IN` locale and `hi-IN-Kalpana` voice. | `sha256:0c9915bf34e3045e39aa245c597aa7223fbf6100d7e20cbcc1bf131f89ee785e` |
-| `hr-hr-matej` | Container image with the `hr-HR` locale and `hr-HR-Matej` voice. | `sha256:fc08c968efe882ed11ad0ee0755a9d43eff88b96da8ec19e7a5c071810c84d8c` |
-| `hu-hu-szabolcs` | Container image with the `hu-HU` locale and `hu-HU-Szabolcs` voice. | `sha256:b6ad73f07efd1576e166b4d7e54a4ff419bfedc513a175fbb968389eb289a4ee` |
-| `id-id-andika` | Container image with the `id-ID` locale and `id-ID-Andika` voice. | `sha256:3aad5ccf0c155593934c29a3e50502bc80b0370fa29626e67cda141d4bf5ac89` |
-| `it-it-cosimo-apollo` | Container image with the `it-IT` locale and `it-IT-Cosimo-Apollo` voice. | `sha256:01502f274bad378e6e99bed5f80fdb476880ce04e8775ca56d338de2f2d43e8c` |
-| `it-it-luciarus` | Container image with the `it-IT` locale and `it-IT-LuciaRUS` voice. | `sha256:fdc20724194612d99e8339d25c72c7fe937ad741abe46d86def6c62880913c2a` |
-| `ja-jp-ayumi-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ayumi-Apollo` voice. | `sha256:abf0e442ec972e25743a8af55da49a6fd5bf2ffd6ca09619d68e4dc9f9db779a` |
-| `ja-jp-harukarus` | Container image with the `ja-JP` locale and `ja-JP-HarukaRUS` voice. | `sha256:9eff152cd4bea6f9de3b101c0704f37c8a061e060287e3f9f8fc2eb28d7dcec7` |
-| `ja-jp-ichiro-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ichiro-Apollo` voice. | `sha256:83aa3c569f7598843d4957f075915ac2635d3aaf577ac1158c12a1238dd7e148` |
-| `ko-kr-heamirus` | Container image with the `ko-KR` locale and `ko-KR-HeamiRUS` voice. | `sha256:ea404c7857f9df0a23cbf3fac12ae00f11c32a6822d91078a321302f09f01082` |
-| `ms-my-rizwan` | Container image with the `ms-MY` locale and `ms-MY-Rizwan` voice. | `sha256:d4c15f7da8e03650395489b6cb6975d59322b1bbd2c59957617f0c0a297409ee` |
-| `nb-no-huldarus` | Container image with the `nb-NO` locale and `nb-NO-HuldaRUS` voice. | `sha256:cb2c0fb57513c66e00bd6b8cbb44882d5bb7d483c19784d2b1e09511d58842bc` |
-| `nl-nl-hannarus` | Container image with the `nl-NL` locale and `nl-NL-HannaRUS` voice. | `sha256:7b9a92ab8a9856f422e65b428b845571a059c0923dc1c348134f271ed7a4abe0` |
-| `pl-pl-paulinarus` | Container image with the `pl-PL` locale and `pl-PL-PaulinaRUS` voice. | `sha256:cface74973368a78d75a2a079214aa748574c5f037b0c4189888269b6016f230` |
-| `pt-br-daniel-apollo` | Container image with the `pt-BR` locale and `pt-BR-Daniel-Apollo` voice. | `sha256:cc3e74228002b8d4e7dc487ff6f930316ac5d7a93f97937942a23f41b484ba8c` |
-| `pt-br-heloisarus` | Container image with the `pt-BR` locale and `pt-BR-HeloisaRUS` voice. | `sha256:dca613867e2f559d9485f9ba553ecea3de6d4b2779d4eed0ce1e53e7f7939773` |
-| `pt-pt-heliarus` | Container image with the `pt-PT` locale and `pt-PT-HeliaRUS` voice. | `sha256:791ac2b3100725f909cfeceb17fc0d5fd1022242db45ba455d7ea088d76ac033` |
-| `ro-ro-andrei` | Container image with the `ro-RO` locale and `ro-RO-Andrei` voice. | `sha256:3b93df188bcbdf9416d203a7e30ade8908728316666cd3451a5f0320cdf219a9` |
-| `ru-ru-ekaterinarus` | Container image with the `ru-RU` locale and `ru-RU-EkaterinaRUS` voice. | `sha256:d2f636e35e67be196a4ad79f168e4df74d2f00d5b5c6123bd61f9aec72bfd1a7` |
-| `ru-ru-irina-apollo` | Container image with the `ru-RU` locale and `ru-RU-Irina-Apollo` voice. | `sha256:247a4c6025faced1be1738d816c1bb74b23bbc5d49458f9afe95dc32ab3ea71c` |
-| `ru-ru-pavel-apollo` | Container image with the `ru-RU` locale and `ru-RU-Pavel-Apollo` voice. | `sha256:355c3a0f64f003d0a041a757b8ddcdea8130b6a56a7c4003a68ba0412400c446` |
-| `sk-sk-filip` | Container image with the `sk-SK` locale and `sk-SK-Filip` voice. | `sha256:55fff1cde012a7791c756104ba68a360e609a765bd776024a9f5f00199f568e5` |
-| `sl-si-lado` | Container image with the `sl-SI` locale and `sl-SI-Lado` voice. | `sha256:7f80965dde85e3a5aae9f69561c296d073289f0b6aa37e95ff0aa5192a5b7f90` |
-| `sv-se-hedvigrus` | Container image with the `sv-SE` locale and `sv-SE-HedvigRUS` voice. | `sha256:1bd43f513a5b2752c44a107e1898459cdda5d7267ec21f379679d411700e5189` |
-| `ta-in-valluvar` | Container image with the `ta-IN` locale and `ta-IN-Valluvar` voice. | `sha256:8062e2479a6a3dc17b8342c07a94a39dd1e1f788c1def0a1ab55a885b491bbab` |
-| `te-in-chitra` | Container image with the `te-IN` locale and `te-IN-Chitra` voice. | `sha256:6ce345df654bd1db213c16c866b608037dcefb1d056fc14727db3b9e21437762` |
-| `th-th-pattara` | Container image with the `th-TH` locale and `th-TH-Pattara` voice. | `sha256:9b9c8ad7f8621f887f3e9fda26f43995855dba76831fdf2598ef383cf3d20f39` |
-| `tr-tr-sedarus` | Container image with the `tr-TR` locale and `tr-TR-SedaRUS` voice. | `sha256:2e45f019df702d8788c1d9c20ff75cfd94aecaaf6facb9f41b642ef1bfe7d318` |
-| `vi-vn-an` | Container image with the `vi-VN` locale and `vi-VN-An` voice. | `sha256:3b142a414ff9f30ebef144e22bf979589600f226442d2f882384695795739178` |
-| `zh-cn-huihuirus` | Container image with the `zh-CN` locale and `zh-CN-HuihuiRUS` voice. | `sha256:23b76501492c9b60e8888eda2f6b0258859f68ed6ff7fb49bacbb18cd5f542ed` |
-| `zh-cn-kangkang-apollo` | Container image with the `zh-CN` locale and `zh-CN-Kangkang-Apollo` voice. | `sha256:e9acc58168f6800d9dd11cbc569c9d279ecf28f3d17c702528d25f67edd447c9` |
-| `zh-cn-yaoyao-apollo` | Container image with the `zh-CN` locale and `zh-CN-Yaoyao-Apollo` voice. | `sha256:85e7d7ae77d41195de5102b772621ef34564d40fad224a0ed21a8fe8daf98b0f` |
-| `zh-hk-danny-apollo` | Container image with the `zh-HK` locale and `zh-HK-Danny-Apollo` voice. | `sha256:1fcba05138c0e5bf36447530311800e2d4044824b5d893439a12f3ebc6380135` |
-| `zh-hk-tracy-apollo` | Container image with the `zh-HK` locale and `zh-HK-Tracy-Apollo` voice. | `sha256:d02bd8759e085abbc95725aa4f70f124c4505aa0856a17696a1555b2cf64512e` |
-| `zh-hk-tracyrus` | Container image with the `zh-HK` locale and `zh-HK-TracyRUS` voice. | `sha256:d02bd8759e085abbc95725aa4f70f124c4505aa0856a17696a1555b2cf64512e` |
-| `zh-tw-hanhanrus` | Container image with the `zh-TW` locale and `zh-TW-HanHanRUS` voice. | `sha256:a3f68538088b5b07f4dc27239fa3a6308d949c2643638634c74f3ee132bca911` |
-| `zh-tw-yating-apollo` | Container image with the `zh-TW` locale and `zh-TW-Yating-Apollo` voice. | `sha256:bb0696685f3a90fe6898ff1487cb0c5957e02f3c63cdb7d02394b5c061339bf3` |
-| `zh-tw-zhiwei-apollo` | Container image with the `zh-TW` locale and `zh-TW-Zhiwei-Apollo` voice. | `sha256:1772b3bc8b166f429356b00d07ca438202c75d578b6d1655351b9c1e06ae1424` |
-
-| Locales for v1.12.0 | Notes | Digest |
-|--|:--|:--|
-| `ar-eg-hoda` | Container image with the `ar-EG` locale and `ar-EG-Hoda` voice. | `sha256:987e6b3e9e13570eb29117e87829a4905b35c712a0f36429dd6404793af31627` |
-| `ar-sa-naayf` | Container image with the `ar-SA` locale and `ar-SA-Naayf` voice. | `sha256:7d1d3c337b7e3bdc6ae2b3e074828ffc3c64ffc0ab94abcb89896e623148d963` |
-| `bg-bg-ivan` | Container image with the `bg-BG` locale and `bg-BG-Ivan` voice. | `sha256:cf01bea4f1f6b7112871da84fd82fb7e6de106c11cc933f21131385173f1da09` |
-| `ca-es-herenarus` | Container image with the `ca-ES` locale and `ca-ES-HerenaRUS` voice. | `sha256:d6060a1e16cbe40990677b3c46f14dc619ee6887d39a4af1cac51fba2baca9a9` |
-| `cs-cz-jakub` | Container image with the `cs-CZ` locale and `cs-CZ-Jakub` voice. | `sha256:5033185bd60257033989fc4ff124c69b1dd02d5b99b79ff5c52ae84572095693` |
-| `da-dk-hellerus` | Container image with the `da-DK` locale and `da-DK-HelleRUS` voice. | `sha256:ac9655166f8181db2d0e6684cc3a5b6e089da788f17c78067af2cf061c8db660` |
-| `de-at-michael` | Container image with the `de-AT` locale and `de-AT-Michael` voice. | `sha256:90d222aa43c3efac04b9bc3e746b9ebea446cc16c3bdbb471b81065edfbc3023` |
-| `de-ch-karsten` | Container image with the `de-CH` locale and `de-CH-Karsten` voice. | `sha256:0c08c10f559c97eda9a0a3f8527f8b05810a53e8a3fd2b8e9f2ab35f587d6c46` |
-| `de-de-heddarus` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:bf54713a1691f2378cf701a1f68ed0f4d32adeab25b2cbd9493f753d56d13e39` |
-| `de-de-hedda` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:bf54713a1691f2378cf701a1f68ed0f4d32adeab25b2cbd9493f753d56d13e39` |
-| `de-de-stefan-apollo` | Container image with the `de-DE` locale and `de-DE-Stefan-Apollo` voice. | `sha256:b94c79ace4b33bad944f88259da4dab5f52da7e78af85a8b6eee0e99ed05a387` |
-| `el-gr-stefanos` | Container image with the `el-GR` locale and `el-GR-Stefanos` voice. | `sha256:3b331be0a6eb32b12d5c6244691bd51ee1d6b218bd3dc066c0f9cb5b78864e14` |
-| `en-au-catherine` | Container image with the `en-AU` locale and `en-AU-Catherine` voice. | `sha256:1bbbd1214119d2e02539f7bef8eeba48e86f17b968f2532a7d96e96ef40ecbe3` |
-| `en-au-hayleyrus` | Container image with the `en-AU` locale and `en-AU-HayleyRUS` voice. | `sha256:aa0a38fd20cabcf33baa97b3a88f354d01055f57ed9376bf98b7ea0993333ffa` |
-| `en-ca-heatherrus` | Container image with the `en-CA` locale and `en-CA-HeatherRUS` voice. | `sha256:57966c65522862572e07ba474fba7e2c6038091cc1b8a35861645dffc2fc5f5b` |
-| `en-ca-linda` | Container image with the `en-CA` locale and `en-CA-Linda` voice. | `sha256:57c6ff08057f199a8eb75668f8ddce26b92c87a7e01e9003b74339b98ea438c4` |
-| `en-gb-george-apollo` | Container image with the `en-GB` locale and `en-GB-George-Apollo` voice. | `sha256:89a8b8b8e900e6dbda665d245fd8a911d6e3286ee16a92e46f1993dc3667b631` |
-| `en-gb-hazelrus` | Container image with the `en-GB` locale and `en-GB-HazelRUS` voice. | `sha256:18347ce1c4e4e21180f64c27bb4bcbebbf52597e25db7e24dbeb57edcea56109` |
-| `en-gb-susan-apollo` | Container image with the `en-GB` locale and `en-GB-Susan-Apollo` voice. | `sha256:015905bd42f8fb4ec575d971ff2d710ac5f904da2b84909270d3a7e51f5e3029` |
-| `en-ie-sean` | Container image with the `en-IE` locale and `en-IE-Sean` voice. | `sha256:4a490dcc6be935178761f14edbdd0c6e4036626046dbfeda002336d871c36fdc` |
-| `en-in-heera-apollo` | Container image with the `en-IN` locale and `en-IN-Heera-Apollo` voice. | `sha256:f26fb9b32ca82aa00c40f8824ed5d3d95ba1be5a10343e8649946d9468f9f74f` |
-| `en-in-priyarus` | Container image with the `en-IN` locale and `en-IN-PriyaRUS` voice. | `sha256:43f5fffad77d3446bc08922df36e244115ecf6090e7c48c42281c2fa62d23b90` |
-| `en-in-ravi-apollo` | Container image with the `en-IN` locale and `en-IN-Ravi-Apollo` voice. | `sha256:0ca4a07585a61a6e15c7fd951b77bab6b5cf8934ecff65fe4ca6cfe8e47f351b` |
-| `en-us-aria24krus` | Container image with the `en-US` locale and `en-US-Aria24kRUS` voice. | `sha256:00857cb570528dee93f7c9c7f96bb2e11763ff6aa9cc7405a05bcbad3d85b08d` |
-| `en-us-ariarus` | Container image with the `en-US` locale and `en-US-AriaRUS` voice. | `sha256:00857cb570528dee93f7c9c7f96bb2e11763ff6aa9cc7405a05bcbad3d85b08d` |
-| `en-us-benjaminrus` | Container image with the `en-US` locale and `en-US-BenjaminRUS` voice. | `sha256:3d7c911788bda58225a7100ba1a9afbb61e0a9f8b7633b383fe6e9faa48471d0` |
-| `en-us-guy24krus` | Container image with the `en-US` locale and `en-US-Guy24kRUS` voice. | `sha256:251841a8399bd168644460e3ebf6d92f093dc8ea60f9defdc663a7e1f60515fa` |
-| `en-us-zirarus` | Container image with the `en-US` locale and `en-US-ZiraRUS` voice. | `sha256:dbc6bb44b283902755907d9cee5694f880c95c6cf939f328059d826fefe53dfa` |
-| `es-es-helenarus` | Container image with the `es-ES` locale and `es-ES-HelenaRUS` voice. | `sha256:9f11111e24b554d907d36516d130324d64a477b512cbd7faffa0b7d3895aa538` |
-| `es-es-laura-apollo` | Container image with the `es-ES` locale and `es-ES-Laura-Apollo` voice. | `sha256:04add8f669539cb2522237a1b01d263b30ed609332cd2ff6dcf2c88fcd24764a` |
-| `es-es-pablo-apollo` | Container image with the `es-ES` locale and `es-ES-Pablo-Apollo` voice. | `sha256:d375f7eea3592e041943a56ba18bec9ebc4bba1c99dea4d583f2012aee31cff7` |
-| `es-mx-hildarus` | Container image with the `es-MX` locale and `es-MX-HildaRUS` voice. | `sha256:437e38d9cb97d2cee27890529eccc1d0b96622749c83844b89c50dc119176b61` |
-| `es-mx-raul-apollo` | Container image with the `es-MX` locale and `es-MX-Raul-Apollo` voice. | `sha256:b6c0937fddd2e4d39a7cd96628a3d7d6004936f356cb553942e4f7dd48824b52` |
-| `fi-fi-heidirus` | Container image with the `fi-FI` locale and `fi-FI-HeidiRUS` voice. | `sha256:5a359ab047d811996cccb9f3f95a59a7e023ee5be72ff0f509e7ebfeb0d3a07a` |
-| `fr-ca-caroline` | Container image with the `fr-CA` locale and `fr-CA-Caroline` voice. | `sha256:439bab9f2933c73e52e78f1683a027e81a251c32fb8aa49b6cd8e7c9b2451f15` |
-| `fr-ca-harmonierus` | Container image with the `fr-CA` locale and `fr-CA-HarmonieRUS` voice. | `sha256:ca798c5d25454b60cafca44f7f7e32896146966a8de94d00cced06235e38bf00` |
-| `fr-ch-guillaume` | Container image with the `fr-CH` locale and `fr-CH-Guillaume` voice. | `sha256:e696a65a7c40209a8dd8d9ff59ca5334811e993f5b454f6d741ce0fc59258e07` |
-| `fr-fr-hortenserus` | Container image with the `fr-FR` locale and `fr-FR-HortenseRUS` voice. | `sha256:ab6e7c023ee6cef95f8dc4eeb3c804ea1b8af937cadb17efcc12e5b18adcfc69` |
-| `fr-fr-julie-apollo` | Container image with the `fr-FR` locale and `fr-FR-Julie-Apollo` voice. | `sha256:cb8f51f75a0b93baf6efb1624d7d01cd736926769922d61a63773eb3a1097399` |
-| `fr-fr-paul-apollo` | Container image with the `fr-FR` locale and `fr-FR-Paul-Apollo` voice. | `sha256:482aa2eb44f41294780cf299e6105a1a3105a2d8065233b34ef1837879f95b7f` |
-| `he-il-asaf` | Container image with the `he-IL` locale and `he-IL-Asaf` voice. | `sha256:2f24ef0e620eeec3ea14262302d22cbb539a8afa85d356ffa446ca9cfd723b31` |
-| `hi-in-hemant` | Container image with the `hi-IN` locale and `hi-IN-Hemant` voice. | `sha256:0338f8e24eddb819c45909ec3a92c430b1d5ec1567a71495cc19c9a74382b224` |
-| `hi-in-kalpana-apollo` | Container image with the `hi-IN` locale and `hi-IN-Kalpana-Apollo` voice. | `sha256:5d7e10ab0fd18d1d163c31341765b6f65bb198048424aa622b854172e845726d` |
-| `hi-in-kalpana` | Container image with the `hi-IN` locale and `hi-IN-Kalpana` voice. | `sha256:5d7e10ab0fd18d1d163c31341765b6f65bb198048424aa622b854172e845726d` |
-| `hr-hr-matej` | Container image with the `hr-HR` locale and `hr-HR-Matej` voice. | `sha256:08d606969abd0165a798a8e0061e6439d4a33ad6af71aa58a1228e98018e7da9` |
-| `hu-hu-szabolcs` | Container image with the `hu-HU` locale and `hu-HU-Szabolcs` voice. | `sha256:9613dbf91878054e2ab79d5d9c8f3686d5fe80b16c40d38db9aec3a2c3816555` |
-| `id-id-andika` | Container image with the `id-ID` locale and `id-ID-Andika` voice. | `sha256:037ca355d8dcf9bff5fda9b9a4a9c2a54a03f3a48c378693c11437a36a245836` |
-| `it-it-cosimo-apollo` | Container image with the `it-IT` locale and `it-IT-Cosimo-Apollo` voice. | `sha256:647b92d1591501ed032d67cf2cfd719e95c24ffb624143d301c2b6dc5eed7397` |
-| `it-it-luciarus` | Container image with the `it-IT` locale and `it-IT-LuciaRUS` voice. | `sha256:c35e40ffe1352870b9f177dcf70c1cd9eec9f22f92d35080fb5baa1fa65eac8d` |
-| `ja-jp-ayumi-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ayumi-Apollo` voice. | `sha256:4fa1436d83439cc9672fe82e35f57a366d2c1a6eb1df1f9f9175d3a588b09610` |
-| `ja-jp-harukarus` | Container image with the `ja-JP` locale and `ja-JP-HarukaRUS` voice. | `sha256:82f13a16e7812857143d311b5443cecfd7c199a88235728f437ba03e7cd92342` |
-| `ja-jp-ichiro-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ichiro-Apollo` voice. | `sha256:565bfa8bab3a11608fd5fecae1a0cd655b4508404c354d5574af0e88ff1aec76` |
-| `ko-kr-heamirus` | Container image with the `ko-KR` locale and `ko-KR-HeamiRUS` voice. | `sha256:2b9ab2e9d946e152b46a634ae291fedd220c76a7ba133346e80b4b19bcaa1422` |
-| `ms-my-rizwan` | Container image with the `ms-MY` locale and `ms-MY-Rizwan` voice. | `sha256:3a05e09241b43c149132b42079f486f0a076d493d4e4c7e4a56b8a030c5b55c7` |
-| `nb-no-huldarus` | Container image with the `nb-NO` locale and `nb-NO-HuldaRUS` voice. | `sha256:bb018c3c7d65c825c1755c510aca7f73f058ac4dce236dc114131c5699a1cb61` |
-| `nl-nl-hannarus` | Container image with the `nl-NL` locale and `nl-NL-HannaRUS` voice. | `sha256:eb2f7dc4db0981717b5fdd16c290ecb8135bd5ae409e0b569e3de34a9fb9f071` |
-| `pl-pl-paulinarus` | Container image with the `pl-PL` locale and `pl-PL-PaulinaRUS` voice. | `sha256:098fabd9284caabafd4af526d52d5fa70ccbd0dc0e0c658753d7c644ab3bf813` |
-| `pt-br-daniel-apollo` | Container image with the `pt-BR` locale and `pt-BR-Daniel-Apollo` voice. | `sha256:c7c033ef39c3da6c82ed1870e6796f501654403605268bcc8136cedd37c5ad1f` |
-| `pt-br-heloisarus` | Container image with the `pt-BR` locale and `pt-BR-HeloisaRUS` voice. | `sha256:2da1e4c972b47efd82a28b4a8324637d878b100bc730f90e9c9d16a6ccec75e9` |
-| `pt-pt-heliarus` | Container image with the `pt-PT` locale and `pt-PT-HeliaRUS` voice. | `sha256:2f0ba437a2f7fbce9923a4da986aec53ec0ad3d52858e6aa12a7464cfa190240` |
-| `ro-ro-andrei` | Container image with the `ro-RO` locale and `ro-RO-Andrei` voice. | `sha256:847e60ea915697dd038319a071757e095229ca0001bf05f1d922d4c52ff4b22a` |
-| `ru-ru-ekaterinarus` | Container image with the `ru-RU` locale and `ru-RU-EkaterinaRUS` voice. | `sha256:37914d4ed1a12d3999385592d5dc0c0ed11148f71f09e11a1bb4c9394691e3b7` |
-| `ru-ru-irina-apollo` | Container image with the `ru-RU` locale and `ru-RU-Irina-Apollo` voice. | `sha256:3a800cee6d1520a1c0502d9b682a7e0f98ef01de58bb39ea31573a9711ef1271` |
-| `ru-ru-pavel-apollo` | Container image with the `ru-RU` locale and `ru-RU-Pavel-Apollo` voice. | `sha256:70ad01c5cf6da459e0938c1da17348624e38d94b3ce4f22e181b9516262e961c` |
-| `sk-sk-filip` | Container image with the `sk-SK` locale and `sk-SK-Filip` voice. | `sha256:8920e7acd70d6d550b66eb3c23878d070dc98219bd59fa8fce1abaf622da4c2f` |
-| `sl-si-lado` | Container image with the `sl-SI` locale and `sl-SI-Lado` voice. | `sha256:c17313ddab7e7d9c2777d4a19df65b34da4e30e52b4a21f81e5c59bacdfce979` |
-| `sv-se-hedvigrus` | Container image with the `sv-SE` locale and `sv-SE-HedvigRUS` voice. | `sha256:91315b4a62bbf69e117cdb4ef88facb02d3ee3d436a1e313af94ba6cb0b8608d` |
-| `ta-in-valluvar` | Container image with the `ta-IN` locale and `ta-IN-Valluvar` voice. | `sha256:d04761de48003062617397de4c4c5f448cd9b4bf57262587d245277d4e408431` |
-| `te-in-chitra` | Container image with the `te-IN` locale and `te-IN-Chitra` voice. | `sha256:e41002cf7f56d948d2737adc23c0750b430d553d78abb2ac53c42427de971299` |
-| `th-th-pattara` | Container image with the `th-TH` locale and `th-TH-Pattara` voice. | `sha256:5f556a0c113750d8780c09be8af7db28bc29784056d22389aec61c256ab9cbcb` |
-| `tr-tr-sedarus` | Container image with the `tr-TR` locale and `tr-TR-SedaRUS` voice. | `sha256:c893b27edd98c0760b7e510c365018e333aa0976ef742f7714ad59c92950a8e2` |
-| `vi-vn-an` | Container image with the `vi-VN` locale and `vi-VN-An` voice. | `sha256:bc34adc094183bbbc461e0350d7aa8e5140ece5e89cd9e77c60f2c96276037b2` |
-| `zh-cn-huihuirus` | Container image with the `zh-CN` locale and `zh-CN-HuihuiRUS` voice. | `sha256:20b23d368f83d4b2926b6d8529d23c4dd84727bb063593d549fb959ce3ace8d2` |
-| `zh-cn-kangkang-apollo` | Container image with the `zh-CN` locale and `zh-CN-Kangkang-Apollo` voice. | `sha256:cb638e72c8966204ab9142810b94cf4c2da54f3fd5917ae0e12a11d28a4253bb` |
-| `zh-cn-yaoyao-apollo` | Container image with the `zh-CN` locale and `zh-CN-Yaoyao-Apollo` voice. | `sha256:041a22b054b0acf494ff3085cdb2cd2eb4faeb7b692027f1723d27c341a8ee33` |
-| `zh-hk-danny-apollo` | Container image with the `zh-HK` locale and `zh-HK-Danny-Apollo` voice. | `sha256:7d9d2766713507b04c0bf3332367e867524ff392b693f4eb8a8c003a4dfc3bac` |
-| `zh-hk-tracy-apollo` | Container image with the `zh-HK` locale and `zh-HK-Tracy-Apollo` voice. | `sha256:b6dfbdbc5ef0d91812d96c88393c0ae4835eea42dbba4c3d36ab9c5e806bb772` |
-| `zh-hk-tracyrus` | Container image with the `zh-HK` locale and `zh-HK-TracyRUS` voice. | `sha256:b6dfbdbc5ef0d91812d96c88393c0ae4835eea42dbba4c3d36ab9c5e806bb772` |
-| `zh-tw-hanhanrus` | Container image with the `zh-TW` locale and `zh-TW-HanHanRUS` voice. | `sha256:9802fc4a9656063cb9f215ca757db5289960d323244272ce280db0395ddd46ac` |
-| `zh-tw-yating-apollo` | Container image with the `zh-TW` locale and `zh-TW-Yating-Apollo` voice. | `sha256:05f50dffbeb17e4215a5a53cc0791d825b63bc1e2b007b00797e5d0e1b1d6d1e` |
-| `zh-tw-zhiwei-apollo` | Container image with the `zh-TW` locale and `zh-TW-Zhiwei-Apollo` voice. | `sha256:e96f4aecba6e3c0741218f3e1aec35e53147b12543be9fdcd76ff98d4c34cf84` |
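Because a version tag can be re-pushed, the digests in the tables above are the only way to pin the exact image bytes. A minimal sketch of building a digest-pinned `docker pull` command follows; the repository path is an assumption for illustration, so substitute the repository you actually pull these containers from:

```shell
# Assumed repository path for illustration only; substitute your actual
# text-to-speech container repository.
REPO="mcr.microsoft.com/azure-cognitive-services/speechservices/text-to-speech"
# Digest copied from the v1.12.0 table above (zh-tw-zhiwei-apollo):
DIGEST="sha256:e96f4aecba6e3c0741218f3e1aec35e53147b12543be9fdcd76ff98d4c34cf84"
# Pulling by "repo@sha256:..." is immutable, unlike pulling by tag:
echo "docker pull ${REPO}@${DIGEST}"
```

Pinning by digest guarantees that redeployments keep fetching the identical image even if the versioned tag is later rebuilt.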
-
-| Locales for v1.11.0 | Notes | Digest |
-|--|:--|:--|
-| `ar-eg-hoda` | Container image with the `ar-EG` locale and `ar-EG-Hoda` voice. | `sha256:7ba558f444ea482eca87b3e850e9b416c71391282b26a590d1ee3d9a81350188` |
-| `ar-sa-naayf` | Container image with the `ar-SA` locale and `ar-SA-Naayf` voice. | `sha256:7f0afcc205340dea7ffd959812dcba6a11448f6c5c1ab55c1422a360bd876137` |
-| `bg-bg-ivan` | Container image with the `bg-BG` locale and `bg-BG-Ivan` voice. | `sha256:fde80af0e2e8e49b49ddec5f1502a246cf308328738d6f572f0043e625673782` |
-| `ca-es-herenarus` | Container image with the `ca-ES` locale and `ca-ES-HerenaRUS` voice. | `sha256:fb2b50b128aa84ad0cd05db2462337d316ff2d2d78f393c5a9dece588a80654e` |
-| `cs-cz-jakub` | Container image with the `cs-CZ` locale and `cs-CZ-Jakub` voice. | `sha256:9dde22e5e2164bee77aaf9fe4e8fc141d9dfbe3c92c4b07da969d34aa14f7fd0` |
-| `da-dk-hellerus` | Container image with the `da-DK` locale and `da-DK-HelleRUS` voice. | `sha256:4a756cd10ad21dcc2b1c7006ec961f7e267f6d2204d9ad4efd6d4730d67a4ccc` |
-| `de-at-michael` | Container image with the `de-AT` locale and `de-AT-Michael` voice. | `sha256:9d531c162c4279830f99ef0d44a506a023a0137723aab3adff7a663043a1c576` |
-| `de-ch-karsten` | Container image with the `de-CH` locale and `de-CH-Karsten` voice. | `sha256:353d07168b4a44fcc12a0239f5bf20e2d29365b9abe26b9b844fb6194e7c9bcc` |
-| `de-de-heddarus` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:d76ff817fc154ba0f5ce1abb93c5a0269fe5bf7b4feb3b3fe9fe8ffe6fd4fee4` |
-| `de-de-hedda` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:d76ff817fc154ba0f5ce1abb93c5a0269fe5bf7b4feb3b3fe9fe8ffe6fd4fee4` |
-| `de-de-stefan-apollo` | Container image with the `de-DE` locale and `de-DE-Stefan-Apollo` voice. | `sha256:8e22964dc4b77c05f602f72b0e706a534a89a271c4d17b5117af122c34df9a18` |
-| `el-gr-stefanos` | Container image with the `el-GR` locale and `el-GR-Stefanos` voice. | `sha256:fcd6288d5fd4ddfe3d3e65e860895f6f7a7e81216c7113f71e7b1b01eb501150` |
-| `en-au-catherine` | Container image with the `en-AU` locale and `en-AU-Catherine` voice. | `sha256:e49a5ec17b696a3a73d10383d369a2ff88ccddb812898a2eedefe6e6a009ce5a` |
-| `en-au-hayleyrus` | Container image with the `en-AU` locale and `en-AU-HayleyRUS` voice. | `sha256:b7fb06bd992982c7e2e71da217898da45b742aab08e901bfcef9c43acf546bc0` |
-| `en-ca-heatherrus` | Container image with the `en-CA` locale and `en-CA-HeatherRUS` voice. | `sha256:efd7d85845ca597937b8cbea7724cf31797855e0de5f30d66984ab9bac688152` |
-| `en-ca-linda` | Container image with the `en-CA` locale and `en-CA-Linda` voice. | `sha256:8211077d55b440dbb26e42db6322b35ef6ec88e8c2ec6647831e0046668ed8a4` |
-| `en-gb-george-apollo` | Container image with the `en-GB` locale and `en-GB-George-Apollo` voice. | `sha256:f6e924720b71d8f9a1edd4f5f2280e9054263eb79ce5364e03c9b802ad92f2dd` |
-| `en-gb-hazelrus` | Container image with the `en-GB` locale and `en-GB-HazelRUS` voice. | `sha256:de702f70c53e4c1647e5fdd3432d37dc8972e069fcc103a1fc2b0be70f0d6d71` |
-| `en-gb-susan-apollo` | Container image with the `en-GB` locale and `en-GB-Susan-Apollo` voice. | `sha256:5077cb575ffeb64e3d70184a68259438821891f6c9865350d2f887ea43ee99c1` |
-| `en-ie-sean` | Container image with the `en-IE` locale and `en-IE-Sean` voice. | `sha256:c6f734cc12f04697a4d9b2003c46c5a4efd8c68da90838debb5628d9f8e70104` |
-| `en-in-heera-apollo` | Container image with the `en-IN` locale and `en-IN-Heera-Apollo` voice. | `sha256:f5a78e857bc1563cbcd74f7b856bc2e4bd981675b397aeccfa134137f1cd3392` |
-| `en-in-priyarus` | Container image with the `en-IN` locale and `en-IN-PriyaRUS` voice. | `sha256:667729cafd6bf5afe071a0a2989f836943e3bb6d3d1ebe35b7fab9bb311bfebc` |
-| `en-in-ravi-apollo` | Container image with the `en-IN` locale and `en-IN-Ravi-Apollo` voice. | `sha256:e46533f972235f297dd31fd338638f5117e3f04fa4a434d678d1cecc76db023b` |
-| `en-us-aria24krus` | Container image with the `en-US` locale and `en-US-Aria24kRUS` voice. | `sha256:a8f881b60021468dbd96d9733606bd00f7f889ccb523d1773492a8301128e596` |
-| `en-us-ariarus` | Container image with the `en-US` locale and `en-US-AriaRUS` voice. | `sha256:a8f881b60021468dbd96d9733606bd00f7f889ccb523d1773492a8301128e596` |
-| `en-us-benjaminrus` | Container image with the `en-US` locale and `en-US-BenjaminRUS` voice. | `sha256:53ee105977b6440f1a7fe5088255a9c6e437c39b7c66e5cd4aba984a1667b25c` |
-| `en-us-guy24krus` | Container image with the `en-US` locale and `en-US-Guy24kRUS` voice. | `sha256:537d2018f414b825aa9995d2e15e0bdb0119e45f2c6fc10d326e3df6f49ef713` |
-| `en-us-zirarus` | Container image with the `en-US` locale and `en-US-ZiraRUS` voice. | `sha256:05da3347d457ca040cbe9b3e3d586d298a844f906b34ef7b6d768c247274ff1f` |
-| `es-es-helenarus` | Container image with the `es-ES` locale and `es-ES-HelenaRUS` voice. | `sha256:481cc43ba896a0d3291903af84120fa618130e2a2c8dce9b0ef23172b66858a8` |
-| `es-es-laura-apollo` | Container image with the `es-ES` locale and `es-ES-Laura-Apollo` voice. | `sha256:8cb9d071a1e01dc3e63d5f1b1c040aa6fee94488a5bbd60f2c91704abfd921cc` |
-| `es-es-pablo-apollo` | Container image with the `es-ES` locale and `es-ES-Pablo-Apollo` voice. | `sha256:da293ff5c49435c020044614962382040f41b6339ec83677301921a6dabbafb7` |
-| `es-mx-hildarus` | Container image with the `es-MX` locale and `es-MX-HildaRUS` voice. | `sha256:9677d5bbbbe0c73df93948d4ecf3f367830ef9e7cfb3b42557cf94ec514b6c68` |
-| `es-mx-raul-apollo` | Container image with the `es-MX` locale and `es-MX-Raul-Apollo` voice. | `sha256:a5109a6a659aa321892d4c6844e102ac72990fc2d58f32e45a072b291849fee8` |
-| `fi-fi-heidirus` | Container image with the `fi-FI` locale and `fi-FI-HeidiRUS` voice. | `sha256:f8f1aa8168660ee1c21dfa4a92530bcba6f1aeb765cee9087a6cc29d7c332a8a` |
-| `fr-ca-caroline` | Container image with the `fr-CA` locale and `fr-CA-Caroline` voice. | `sha256:450f0f75f26299a89a80efc3ce93b42d6447a32022aaf4f88edc935e56100191` |
-| `fr-ca-harmonierus` | Container image with the `fr-CA` locale and `fr-CA-HarmonieRUS` voice. | `sha256:7b18adf90e6db8f8e2c5955f38aa0adfbdbd10a9a95e2cf13035b9c5416000e8` |
-| `fr-ch-guillaume` | Container image with the `fr-CH` locale and `fr-CH-Guillaume` voice. | `sha256:ec3c238d0bfc3d26f20349ade1c4e19805b796f4bb3d5bf1fe4a9801b1ea1471` |
-| `fr-fr-hortenserus` | Container image with the `fr-FR` locale and `fr-FR-HortenseRUS` voice. | `sha256:7b13613a9c5260e03ed831c79e5538633b4201867068ca0e1624b2c39fa8cf39` |
-| `fr-fr-julie-apollo` | Container image with the `fr-FR` locale and `fr-FR-Julie-Apollo` voice. | `sha256:162c777447e3077438865332ac34df956be43c0429ce9962bcf5df9b210dbf01` |
-| `fr-fr-paul-apollo` | Container image with the `fr-FR` locale and `fr-FR-Paul-Apollo` voice. | `sha256:8cdf28dc31d40a69eb6720fd42b8c19792f973c4e58760abbb6573c6129c81c1` |
-| `he-il-asaf` | Container image with the `he-IL` locale and `he-IL-Asaf` voice. | `sha256:3f9ec9201deca21f5e3e561d6dd673ee6fb2a7f13b4cae2985ffb69622994b99` |
-| `hi-in-hemant` | Container image with the `hi-IN` locale and `hi-IN-Hemant` voice. | `sha256:c6de645816587116384ada93c02257f257a13a4b696e1bd8aeecebb9a9668f15` |
-| `hi-in-kalpana-apollo` | Container image with the `hi-IN` locale and `hi-IN-Kalpana-Apollo` voice. | `sha256:455ab4c9bc7c2457e2e48265065789a54513e07a1dc9e4bc108651f118f1570d` |
-| `hi-in-kalpana` | Container image with the `hi-IN` locale and `hi-IN-Kalpana` voice. | `sha256:455ab4c9bc7c2457e2e48265065789a54513e07a1dc9e4bc108651f118f1570d` |
-| `hr-hr-matej` | Container image with the `hr-HR` locale and `hr-HR-Matej` voice. | `sha256:6ac24252194f91cd815736bd8be03fb95e0b965fabed5de4c631e99cd917da97` |
-| `hu-hu-szabolcs` | Container image with the `hu-HU` locale and `hu-HU-Szabolcs` voice. | `sha256:bf20ea91d922beb682e321a31cabb11ebec474f47edcf4e3787882e2a204b3b5` |
-| `id-id-andika` | Container image with the `id-ID` locale and `id-ID-Andika` voice. | `sha256:859bef31e5d882b508154ec00632e5e1e95bc8ea2dde6198f157703d759746c7` |
-| `it-it-cosimo-apollo` | Container image with the `it-IT` locale and `it-IT-Cosimo-Apollo` voice. | `sha256:b6c81ab4bd0aba217977b0bd83a8a65f7c09b5954cda0870dea15aec0dbbe1ed` |
-| `it-it-luciarus` | Container image with the `it-IT` locale and `it-IT-LuciaRUS` voice. | `sha256:e216a1390a0d4d9f111c56c1d655f36614947eea18d6ec91a9f6d050048b1ad4` |
-| `ja-jp-ayumi-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ayumi-Apollo` voice. | `sha256:ba2042523ea1fff9d2c8b805ac36075169c3aecce0c965d09e326c06eab5a36f` |
-| `ja-jp-harukarus` | Container image with the `ja-JP` locale and `ja-JP-HarukaRUS` voice. | `sha256:fdbc8f59fc1c4b52c11d248ee9a5d7fe4e58343f036e558fbb33282e24d5b71f` |
-| `ja-jp-ichiro-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ichiro-Apollo` voice. | `sha256:08ea0ed61ac152dc5caea2d4cacc81175c272cb4a835eecaa7f8e7c5485740b7` |
-| `ko-kr-heamirus` | Container image with the `ko-KR` locale and `ko-KR-HeamiRUS` voice. | `sha256:40ff95e5fb92278e369b4f37d7dbb109431ecb115b1b9516aa887e6bb4fd030b` |
-| `ms-my-rizwan` | Container image with the `ms-MY` locale and `ms-MY-Rizwan` voice. | `sha256:70cfe68a81ee860136cfaed35909f522c28c20ef5514c2d9d96c283892f8b7f5` |
-| `nb-no-huldarus` | Container image with the `nb-NO` locale and `nb-NO-HuldaRUS` voice. | `sha256:9941cda0e65884900532e6a0ba68e475f373277105594bf09e67225450192d3c` |
-| `nl-nl-hannarus` | Container image with the `nl-NL` locale and `nl-NL-HannaRUS` voice. | `sha256:c71d980dfc70575421d1589c74e8b3e7cc036551412d0ad0f89dbc543252a405` |
-| `pl-pl-paulinarus` | Container image with the `pl-PL` locale and `pl-PL-PaulinaRUS` voice. | `sha256:e5fbd98a70eb1dcf80c446b48b8f17e47ac12853bb255f0aed174c78196de257` |
-| `pt-br-daniel-apollo` | Container image with the `pt-BR` locale and `pt-BR-Daniel-Apollo` voice. | `sha256:9f57f9847f2372fa341cf037410ac68ada1c3075ab9b77cffbcf01d199f7c1f5` |
-| `pt-br-heloisarus` | Container image with the `pt-BR` locale and `pt-BR-HeloisaRUS` voice. | `sha256:ef546532c582392e6ed47df55c0fbfa6dca6d3e523547089263b57354a4efb1a` |
-| `pt-pt-heliarus` | Container image with the `pt-PT` locale and `pt-PT-HeliaRUS` voice. | `sha256:116aefb76ddf39bed379c023c8260d2607314ad1b31ddef83ec2818ad9805a0b` |
-| `ro-ro-andrei` | Container image with the `ro-RO` locale and `ro-RO-Andrei` voice. | `sha256:6968fdefdd798adab48faeb40857c8cdca55712dbf4806703e11ccdfab874051` |
-| `ru-ru-ekaterinarus` | Container image with the `ru-RU` locale and `ru-RU-EkaterinaRUS` voice. | `sha256:48add20e3c147fb4be26c948841a12736c8b10d053aa7d25984df8e4016e939f` |
-| `ru-ru-irina-apollo` | Container image with the `ru-RU` locale and `ru-RU-Irina-Apollo` voice. | `sha256:ce5c055aedb3f9323f41a9de8d8f3dd23fb2ad0621d499f914f5cb3856e995f3` |
-| `ru-ru-pavel-apollo` | Container image with the `ru-RU` locale and `ru-RU-Pavel-Apollo` voice. | `sha256:badc02f9ccdee13ab7dbd4e178bd5c57d332cc3acd2d4a9a3f889d317e0517be` |
-| `sk-sk-filip` | Container image with the `sk-SK` locale and `sk-SK-Filip` voice. | `sha256:763d4fe74b6f04a976482880eed76175854f659bb5bfcb315dce8ef69acead2e` |
-| `sl-si-lado` | Container image with the `sl-SI` locale and `sl-SI-Lado` voice. | `sha256:73374363f9b69e03b8b9de34b319d7797876a3dae40bdce0830a67cf4bb4d4f2` |
-| `sv-se-hedvigrus` | Container image with the `sv-SE` locale and `sv-SE-HedvigRUS` voice. | `sha256:317d6b5d69f56c9087cd1e8004e60a48841b997937dcdccc97e7c0b2e2ffb631` |
-| `ta-in-valluvar` | Container image with the `ta-IN` locale and `ta-IN-Valluvar` voice. | `sha256:d1aaad1d5f32a910e245e6c117178c0703d39035e4053fe2dd2bb646fc02f7b8` |
-| `te-in-chitra` | Container image with the `te-IN` locale and `te-IN-Chitra` voice. | `sha256:0224ac3b2de11c4f6ef65ce0bdcd1b9c4112ea472b3bd5626fdff47a5185f54c` |
-| `th-th-pattara` | Container image with the `th-TH` locale and `th-TH-Pattara` voice. | `sha256:16c7384bfe210f30e09eae3542a58ff9bdbfa9253fdf4d380a53b37809f82c7d` |
-| `tr-tr-sedarus` | Container image with the `tr-TR` locale and `tr-TR-SedaRUS` voice. | `sha256:5c7786c00a66346438ee4065e3eaa03ef9f8323ba839068344492b8a3b6d997a` |
-| `vi-vn-an` | Container image with the `vi-VN` locale and `vi-VN-An` voice. | `sha256:6925744597c45eed8761a9597f3525f435dd420b67ff775a73211fdef9cd9cb2` |
-| `zh-cn-huihuirus` | Container image with the `zh-CN` locale and `zh-CN-HuihuiRUS` voice. | `sha256:b38a3f465062853b171d2bce6c6d8afa14d223e24bfd5ea0827e34c26a09a2c8` |
-| `zh-cn-kangkang-apollo` | Container image with the `zh-CN` locale and `zh-CN-Kangkang-Apollo` voice. | `sha256:fa9555e2f520340457d5cebe469af40516237fb9398a5f90046565655b2862f8` |
-| `zh-cn-yaoyao-apollo` | Container image with the `zh-CN` locale and `zh-CN-Yaoyao-Apollo` voice. | `sha256:d7eeca43e45d09a1c22611f865fb1f8b42673688a11a2acffd37a4e08a7fd8c4` |
-| `zh-hk-danny-apollo` | Container image with the `zh-HK` locale and `zh-HK-Danny-Apollo` voice. | `sha256:ee7257c0179fbe015324b4d29f16fe93964e5f1901906240477fb1d820a500f2` |
-| `zh-hk-tracy-apollo` | Container image with the `zh-HK` locale and `zh-HK-Tracy-Apollo` voice. | `sha256:dfa4effbf7d0ec6c9130c142241b3e247e226e13dc218fd44f986ca1c7fff2ed` |
-| `zh-hk-tracyrus` | Container image with the `zh-HK` locale and `zh-HK-TracyRUS` voice. | `sha256:dfa4effbf7d0ec6c9130c142241b3e247e226e13dc218fd44f986ca1c7fff2ed` |
-| `zh-tw-hanhanrus` | Container image with the `zh-TW` locale and `zh-TW-HanHanRUS` voice. | `sha256:263153fd6e05970e04af9a9bd95fb13591f0138ac030a632a6a78d95936afa4b` |
-| `zh-tw-yating-apollo` | Container image with the `zh-TW` locale and `zh-TW-Yating-Apollo` voice. | `sha256:b8289bb550b9328d83d6a7ec93bdf9524087222f537a55db0b2eb5402c2bf663` |
-| `zh-tw-zhiwei-apollo` | Container image with the `zh-TW` locale and `zh-TW-Zhiwei-Apollo` voice. | `sha256:af4bc0ef2211f69a92541bb14596341375e1003aef541aefcea7843192046b4c` |
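When copying a digest out of a table like the one above, a quick well-formedness check catches truncation before the value is used to pin an image. A small sketch: a complete sha256 digest is the literal prefix `sha256:` followed by exactly 64 hexadecimal characters.

```shell
# Validate that a digest string is "sha256:" plus exactly 64 hex characters
# before using it in a pull or deployment spec.
DIGEST="sha256:af4bc0ef2211f69a92541bb14596341375e1003aef541aefcea7843192046b4c"
if printf '%s' "$DIGEST" | grep -Eq '^sha256:[0-9a-f]{64}$'; then
  echo "valid"
else
  echo "invalid"
fi
```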
-
-| Locales for v1.9.0 | Notes | Digest |
-|--|:--|:--|
-| `ar-eg-hoda` | Container image with the `ar-EG` locale and `ar-EG-Hoda` voice. | `sha256:2b19cfd2212d6517b286aa18617d2f9d1dd1520078b559cbbf9240599270d10` |
-| `ar-sa-naayf` | Container image with the `ar-SA` locale and `ar-SA-Naayf` voice. | `sha256:6063aae5fb15c62b234cf945220916516a06ca81354c5311dee02af4d8cb0d3` |
-| `bg-bg-ivan` | Container image with the `bg-BG` locale and `bg-BG-Ivan` voice. | `sha256:c6786916464755e64ffa64e69e8f3e7ef16115bac00bb6ea1e45368c42c58d1` |
-| `ca-es-herenarus` | Container image with the `ca-ES` locale and `ca-ES-HerenaRUS` voice. | `sha256:2a8a1accbf99e2746c9345b77e2f261e0111227312c402cc2e1cd8760cdc82a` |
-| `cs-cz-jakub` | Container image with the `cs-CZ` locale and `cs-CZ-Jakub` voice. | `sha256:3e464356bb08c9c966af2b28a88ccafd591aecd2e37a0fedb356bd443720e8d` |
-| `da-dk-hellerus` | Container image with the `da-DK` locale and `da-DK-HelleRUS` voice. | `sha256:b85c43080804103673ff99dddea644a516c4103e8b1f11fa3dd34857492cd40` |
-| `de-at-michael` | Container image with the `de-AT` locale and `de-AT-Michael` voice. | `sha256:87b57ee61f964e4d72e75d860c499fa3b3d8dbda6a96c97d696beb20aa8b2a9` |
-| `de-ch-karsten` | Container image with the `de-CH` locale and `de-CH-Karsten` voice. | `sha256:ab1385b9746f4f054204302b9d564a433ae03748021b8ed71b4a3a224af1e9b` |
-| `de-de-heddarus` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:82185a710c87f9dde678d88036867559ab3bf5f08f234d60d1548d3e106db57` |
-| `de-de-hedda` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. | `sha256:82185a710c87f9dde678d88036867559ab3bf5f08f234d60d1548d3e106db57` |
-| `de-de-stefan-apollo` | Container image with the `de-DE` locale and `de-DE-Stefan-Apollo` voice. | `sha256:56a1c63e7e6a0f5623ddc1f6a44ac6e51471d073e02e14e8c8b1e577930d816` |
-| `el-gr-stefanos` | Container image with the `el-GR` locale and `el-GR-Stefanos` voice. | `sha256:ccbbb09f29ff8f276e246037183c7a3e9a3eb5bf33a942b22205cce3c6857f2` |
-| `en-au-catherine` | Container image with the `en-AU` locale and `en-AU-Catherine` voice. | `sha256:0c7374890f963e1ae9507e89dc9965a94723bd57802826c0677cd5262189783` |
-| `en-au-hayleyrus` | Container image with the `en-AU` locale and `en-AU-HayleyRUS` voice. | `sha256:7430bf8eace8294ca085f36ea56399261b2b4f69027e86649e8f3868fc3d811` |
-| `en-ca-heatherrus` | Container image with the `en-CA` locale and `en-CA-HeatherRUS` voice. | `sha256:0166ce1de3d669ea4ad80738c63369b7032125a54ecabade07241d740a94cfe` |
-| `en-ca-linda` | Container image with the `en-CA` locale and `en-CA-Linda` voice. | `sha256:50bed6a7bde9b793d307bcc3ace4c0f28d4a33c7a4dad9b3a394dc39a3e1c28` |
-| `en-gb-george-apollo` | Container image with the `en-GB` locale and `en-GB-George-Apollo` voice. | `sha256:50b800c0018a39609ddb1cee1b10062bf38a907644c393d20786db7c3ade748` |
-| `en-gb-hazelrus` | Container image with the `en-GB` locale and `en-GB-HazelRUS` voice. | `sha256:2aa79394dfeac8cec0cc1704a5199949cfccf347fe61161d02c7000c4ffcfa6` |
-| `en-gb-susan-apollo` | Container image with the `en-GB` locale and `en-GB-Susan-Apollo` voice. | `sha256:7a3174b3aae5f10241e731d392b56f124808cdd506f881ced919ced73d836c0` |
-| `en-ie-sean` | Container image with the `en-IE` locale and `en-IE-Sean` voice. | `sha256:2457202fadb2354fc8d3666432096bd87c07760a4e3f4dbcc49853fff658577` |
-| `en-in-heera-apollo` | Container image with the `en-IN` locale and `en-IN-Heera-Apollo` voice. | `sha256:e4068cd7ca4272ea94819e2ba8743d2a76c8710b162db5e9ecbde6c92c12877` |
-| `en-in-priyarus` | Container image with the `en-IN` locale and `en-IN-PriyaRUS` voice. | `sha256:9d63a0ed53ac06178ab84588551421c0e1d04b8bad3321410ebb99c3ca2a9e8` |
-| `en-in-ravi-apollo` | Container image with the `en-IN` locale and `en-IN-Ravi-Apollo` voice. | `sha256:67049c9ce591336655943f5030afcfdaa150a8aace7b372425a69cc33a6b7b9` |
-| `en-us-aria24krus` | Container image with the `en-US` locale and `en-US-Aria24kRUS` voice. | `sha256:a95acf6874bf3df7ae8e96be779f80cb5405d21250227b0c4b3ddbcb3014082` |
-| `en-us-ariarus` | Container image with the `en-US` locale and `en-US-AriaRUS` voice. | `sha256:a95acf6874bf3df7ae8e96be779f80cb5405d21250227b0c4b3ddbcb3014082` |
-| `en-us-benjaminrus` | Container image with the `en-US` locale and `en-US-BenjaminRUS` voice. | `sha256:93cd49adaaa2a1bdfb06ab655be164ae66f206cb7c03a2cbd59e5fba70610ab` |
-| `en-us-guy24krus` | Container image with the `en-US` locale and `en-US-Guy24kRUS` voice. | `sha256:7b788bfcaae4c63c274ca15924bfd861cfcafd5fec13f685d80babc25b2949d` |
-| `en-us-zirarus` | Container image with the `en-US` locale and `en-US-ZiraRUS` voice. | `sha256:bfc87a77df5695ad43481348500fba8f6a7b495708fba200706049469b5ba97` |
-| `es-es-helenarus` | Container image with the `es-ES` locale and `es-ES-HelenaRUS` voice. | `sha256:0b6c17aca75efb64aa9bfc0d83303038fe58d4b2fb1fc94c9380a4335b80796` |
-| `es-es-laura-apollo` | Container image with the `es-ES` locale and `es-ES-Laura-Apollo` voice. | `sha256:d6fcffc944c37a2dd0de29c39b82f3f8cce3a95ad925d2814ed7538335d5d4f` |
-| `es-es-pablo-apollo` | Container image with the `es-ES` locale and `es-ES-Pablo-Apollo` voice. | `sha256:a460bc53d9083d3c3770129995cf96cc1069ae4e8101f1739d304fe210f0af0` |
-| `es-mx-hildarus` | Container image with the `es-MX` locale and `es-MX-HildaRUS` voice. | `sha256:5b7578fc5b00158dfa674d95a3f1d57f22eb285e8333b4006d1fe1808bda7ba` |
-| `es-mx-raul-apollo` | Container image with the `es-MX` locale and `es-MX-Raul-Apollo` voice. | `sha256:03922fb017783c86d788c72e01c7ede440f8f3c913c86cab19bad4dfc2e4a2b` |
-| `fi-fi-heidirus` | Container image with the `fi-FI` locale and `fi-FI-HeidiRUS` voice. | `sha256:146c1f98d6fa061016eba41db6e7b654eef222d37f35406d4b43477bb2ff897` |
-| `fr-ca-caroline` | Container image with the `fr-CA` locale and `fr-CA-Caroline` voice. | `sha256:1ee2e53f12ad1c72665d2aef64e9d4a7f9ea05670cad84dcae5e75409494f32` |
-| `fr-ca-harmonierus` | Container image with the `fr-CA` locale and `fr-CA-HarmonieRUS` voice. | `sha256:a21d25d3ac699af4e9ba9194aadd9b45f35fd9205224f3429a4c7da41fc38fe` |
-| `fr-ch-guillaume` | Container image with the `fr-CH` locale and `fr-CH-Guillaume` voice. | `sha256:216125a9bd89a95d3c4dc2d7e031398659427b3aa7d4663d23a65737972e42b` |
-| `fr-fr-hortenserus` | Container image with the `fr-FR` locale and `fr-FR-HortenseRUS` voice. | `sha256:795a698120eecbd80c48e738f73300739c1698ca859130ddb4236317bcdf70f` |
-| `fr-fr-julie-apollo` | Container image with the `fr-FR` locale and `fr-FR-Julie-Apollo` voice. | `sha256:f6eb70d523c435c2e3a713b32a8af4a781df7ec043caad2fc7f458ee341eb2f` |
-| `fr-fr-paul-apollo` | Container image with the `fr-FR` locale and `fr-FR-Paul-Apollo` voice. | `sha256:28864c662a20f459b3051b1da2967a605e06267e6408285f7c2552748cf4eed` |
-| `he-il-asaf` | Container image with the `he-IL` locale and `he-IL-Asaf` voice. | `sha256:eaa834bac6b69abef096b36a8baead741db78fe438af3d30f60abde3631d639` |
-| `hi-in-hemant` | Container image with the `hi-IN` locale and `hi-IN-Hemant` voice. | `sha256:cfea0fa7cce9cc512f2fbb8b76f1c00fe5c32fad853c90b15934cf4ee6262fa` |
-| `hi-in-kalpana-apollo` | Container image with the `hi-IN` locale and `hi-IN-Kalpana-Apollo` voice. | `sha256:afbd6cc0413f3a3c9f6df044b6df6d9dac9e8e888c2cb619fefbdc3e105c644` |
-| `hi-in-kalpana` | Container image with the `hi-IN` locale and `hi-IN-Kalpana` voice. | `sha256:afbd6cc0413f3a3c9f6df044b6df6d9dac9e8e888c2cb619fefbdc3e105c644` |
-| `hr-hr-matej` | Container image with the `hr-HR` locale and `hr-HR-Matej` voice. | `sha256:86683597c62752b4d769b69e5294979fafd4c277aaef1536e1cb19f9f06c0bf` |
-| `hu-hu-szabolcs` | Container image with the `hu-HU` locale and `hu-HU-Szabolcs` voice. | `sha256:aa64eed28ca2ad060e2e02188e0401bf34e4caf7e2182b70a30ce33b3c11c9c` |
-| `id-id-andika` | Container image with the `id-ID` locale and `id-ID-Andika` voice. | `sha256:0e1394d231a57a1df8163ccb634dc2ef2f8103b10608a40ab3efc5c0fbe9ded` |
-| `it-it-cosimo-apollo` | Container image with the `it-IT` locale and `it-IT-Cosimo-Apollo` voice. | `sha256:eef97f2817fc24405823a5fe4e825244db32279b44c0e6631e8ad9a5c1acf40` |
-| `it-it-luciarus` | Container image with the `it-IT` locale and `it-IT-LuciaRUS` voice. | `sha256:ebc331b0685f482d2f55619fa81fd451fd7c8f107f9cd7ad159bc6213ae4e33` |
-| `ja-jp-ayumi-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ayumi-Apollo` voice. | `sha256:e9cb7dfd2eec154c8f3d530c16b66e8558c5955a2edaede69740067f00e43cf` |
-| `ja-jp-harukarus` | Container image with the `ja-JP` locale and `ja-JP-HarukaRUS` voice. | `sha256:93ce2ef6177c0d8ac70b61df8b11fcbcdfd3c0be0cc51cd8644f26679a741c2` |
-| `ja-jp-ichiro-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ichiro-Apollo` voice. | `sha256:6a18bae69ac63b42ba992b8b74d8d31d91ca984d61b5f62f38be988cf38645e` |
-| `ko-kr-heamirus` | Container image with the `ko-KR` locale and `ko-KR-HeamiRUS` voice. | `sha256:7a48252d4ada2af43f9266a70113426d330bac192348cbdc929022295a0e727` |
-| `ms-my-rizwan` | Container image with the `ms-MY` locale and `ms-MY-Rizwan` voice. | `sha256:90e2ecac14f8e960934fd013d208fc2a0afe1bfff037d5648d422bda8d8a76e` |
-| `nb-no-huldarus` | Container image with the `nb-NO` locale and `nb-NO-HuldaRUS` voice. | `sha256:217b61bd6244b5effda8f12a2c563ce1b4572e9c5b8a08df143665f9ff754e4` |
-| `nl-nl-hannarus` | Container image with the `nl-NL` locale and `nl-NL-HannaRUS` voice. | `sha256:fbff48dfc9dfadadf377867b28f6e3a3bd605e59da20f77a531efcc7d85d16e` |
-| `pl-pl-paulinarus` | Container image with the `pl-PL` locale and `pl-PL-PaulinaRUS` voice. | `sha256:856a033a09925773fa4b4531e199ab7c03c537f366acecbda60f8d21735725e` |
-| `pt-br-daniel-apollo` | Container image with the `pt-BR` locale and `pt-BR-Daniel-Apollo` voice. | `sha256:2d1ec975f1aee56a6fc6039d154fb3f2fbeb4636f7078c5dfe99aeddb6a3634` |
-| `pt-br-heloisarus` | Container image with the `pt-BR` locale and `pt-BR-HeloisaRUS` voice. | `sha256:b7d629f37ab3305274764264dc08fab5236e60ef18d40e987618115db67ce44` |
-| `pt-pt-heliarus` | Container image with the `pt-PT` locale and `pt-PT-HeliaRUS` voice. | `sha256:8b380ae7e4aac9d4ada4d15fa9e667387bc9ca038796d9b6999953bfbc97259` |
-| `ro-ro-andrei` | Container image with the `ro-RO` locale and `ro-RO-Andrei` voice. | `sha256:b00ca7f1411169a5baf7263a8d7e5eed1a72084d9489eaf458429dfc338564a` |
-| `ru-ru-ekaterinarus` | Container image with the `ru-RU` locale and `ru-RU-EkaterinaRUS` voice. | `sha256:31c588c31e3ac67305af66091e7756dfc4ca454317d0228116ea0b2fedf5d71` |
-| `ru-ru-irina-apollo` | Container image with the `ru-RU` locale and `ru-RU-Irina-Apollo` voice. | `sha256:e76437f8da7c279b38d2643defc997a13b4a364e9a212895cdb33a9a3f6457f` |
-| `ru-ru-pavel-apollo` | Container image with the `ru-RU` locale and `ru-RU-Pavel-Apollo` voice. | `sha256:461c1efa6cce0b10a87f338bc637aca76aef8458061a688870fb3343d682da0` |
-| `sk-sk-filip` | Container image with the `sk-SK` locale and `sk-SK-Filip` voice. | `sha256:7fb0cfab4c0fe2913eb20f28a25c6663015d62f82e7e7864d9f7fac2d27697b` |
-| `sl-si-lado` | Container image with the `sl-SI` locale and `sl-SI-Lado` voice. | `sha256:5336173d410e10ffeb5dc211a583887e33754319c757914955057d398dfbb0a` |
-| `sv-se-hedvigrus` | Container image with the `sv-SE` locale and `sv-SE-HedvigRUS` voice. | `sha256:5dc8cdcc3054386bf69596707d9d261d4db5bfd09f1882ceb4e29238a34b24e` |
-| `ta-in-valluvar` | Container image with the `ta-IN` locale and `ta-IN-Valluvar` voice. | `sha256:74ea485f23e4c1fe0029e06894860aa0188c36c0e14ea3584a06d4216ccef56` |
-| `te-in-chitra` | Container image with the `te-IN` locale and `te-IN-Chitra` voice. | `sha256:ff2977a98ef691da543db08be9cfe04d7fc3bf8f78b29310c163e47303b2ddd` |
-| `th-th-pattara` | Container image with the `th-TH` locale and `th-TH-Pattara` voice. | `sha256:ba7e2c0e5e75d9f2b52aa50c97728616c43e81f48c15e24665e4c2ea5770a8f` |
-| `tr-tr-sedarus` | Container image with the `tr-TR` locale and `tr-TR-SedaRUS` voice. | `sha256:375a8ceae89ea1f0dda551feff30ae3679231189b527992edbc49988d042d66` |
-| `vi-vn-an` | Container image with the `vi-VN` locale and `vi-VN-An` voice. | `sha256:b6f82148295b38b4039c45c48695ec50b4e97cd02b18d49c39bf9fca3bec958` |
-| `zh-cn-huihuirus` | Container image with the `zh-CN` locale and `zh-CN-HuihuiRUS` voice. | `sha256:3e773931f3adaac92cba43773a241692a2b471ebe73ec51c475df8ff63b7ee1` |
-| `zh-cn-kangkang-apollo` | Container image with the `zh-CN` locale and `zh-CN-Kangkang-Apollo` voice. | `sha256:05fc0d5075a1094caf70d98b4a9469952be52cb6eb4d9f7b9ff4ae961100c7b` |
-| `zh-cn-yaoyao-apollo` | Container image with the `zh-CN` locale and `zh-CN-Yaoyao-Apollo` voice. | `sha256:d7613bcefc48e85b9d6f07c8cd223c16d4958bcf7f24087575250e97c593ac1` |
-| `zh-hk-danny-apollo` | Container image with the `zh-HK` locale and `zh-HK-Danny-Apollo` voice. | `sha256:efe22bc123dac9312dcaeb859a377d81f61fbb25ef46e4678d36ec6bebc5d32` |
-| `zh-hk-tracy-apollo` | Container image with the `zh-HK` locale and `zh-HK-Tracy-Apollo` voice. | `sha256:802c60bc65012c03ffe96268dca79b8c6dcd0c5cc6180ec271c50ef5c9ba132` |
-| `zh-hk-tracyrus` | Container image with the `zh-HK` locale and `zh-HK-TracyRUS` voice. | `sha256:802c60bc65012c03ffe96268dca79b8c6dcd0c5cc6180ec271c50ef5c9ba132` |
-| `zh-tw-hanhanrus` | Container image with the `zh-TW` locale and `zh-TW-HanHanRUS` voice. | `sha256:95d58922463d577d4c4722ab722a5768af35fb62236d47f6709717dea758909` |
-| `zh-tw-yating-apollo` | Container image with the `zh-TW` locale and `zh-TW-Yating-Apollo` voice. | `sha256:33eec6e3aaaedafaf3969746eeaf97a1760e763505decfe2abaa03f5054bfd2` |
-| `zh-tw-zhiwei-apollo` | Container image with the `zh-TW` locale and `zh-TW-Zhiwei-Apollo` voice. | `sha256:456db2898b2e5a9c30b7071ce6ea3f141438cbf1aa4899c7ffccfc2f0dde5bd` |
-
-| Locales for v1.8.0 | Notes |
-||:|
-| `ar-eg-hoda` | Container image with the `ar-EG` locale and `ar-EG-Hoda` voice. |
-| `ar-sa-naayf` | Container image with the `ar-SA` locale and `ar-SA-Naayf` voice. |
-| `bg-bg-ivan` | Container image with the `bg-BG` locale and `bg-BG-Ivan` voice. |
-| `ca-es-herenarus` | Container image with the `ca-ES` locale and `ca-ES-HerenaRUS` voice. |
-| `cs-cz-jakub` | Container image with the `cs-CZ` locale and `cs-CZ-Jakub` voice. |
-| `da-dk-hellerus` | Container image with the `da-DK` locale and `da-DK-HelleRUS` voice. |
-| `de-at-michael` | Container image with the `de-AT` locale and `de-AT-Michael` voice. |
-| `de-ch-karsten` | Container image with the `de-CH` locale and `de-CH-Karsten` voice. |
-| `de-de-hedda` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. |
-| `de-de-heddarus` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. |
-| `de-de-stefan-apollo` | Container image with the `de-DE` locale and `de-DE-Stefan-Apollo` voice. |
-| `el-gr-stefanos` | Container image with the `el-GR` locale and `el-GR-Stefanos` voice. |
-| `en-au-catherine` | Container image with the `en-AU` locale and `en-AU-Catherine` voice. |
-| `en-au-hayleyrus` | Container image with the `en-AU` locale and `en-AU-HayleyRUS` voice. |
-| `en-ca-heatherrus` | Container image with the `en-CA` locale and `en-CA-HeatherRUS` voice. |
-| `en-ca-linda` | Container image with the `en-CA` locale and `en-CA-Linda` voice. |
-| `en-gb-george-apollo` | Container image with the `en-GB` locale and `en-GB-George-Apollo` voice. |
-| `en-gb-hazelrus` | Container image with the `en-GB` locale and `en-GB-HazelRUS` voice. |
-| `en-gb-susan-apollo` | Container image with the `en-GB` locale and `en-GB-Susan-Apollo` voice. |
-| `en-ie-sean` | Container image with the `en-IE` locale and `en-IE-Sean` voice. |
-| `en-in-heera-apollo` | Container image with the `en-IN` locale and `en-IN-Heera-Apollo` voice. |
-| `en-in-priyarus` | Container image with the `en-IN` locale and `en-IN-PriyaRUS` voice. |
-| `en-in-ravi-apollo` | Container image with the `en-IN` locale and `en-IN-Ravi-Apollo` voice. |
-| `en-us-benjaminrus` | Container image with the `en-US` locale and `en-US-BenjaminRUS` voice. |
-| `en-us-guy24krus` | Container image with the `en-US` locale and `en-US-Guy24kRUS` voice. |
-| `en-us-aria24krus` | Container image with the `en-US` locale and `en-US-Aria24kRUS` voice. |
-| `en-us-ariarus` | Container image with the `en-US` locale and `en-US-AriaRUS` voice. |
-| `en-us-zirarus` | Container image with the `en-US` locale and `en-US-ZiraRUS` voice. |
-| `es-es-helenarus` | Container image with the `es-ES` locale and `es-ES-HelenaRUS` voice. |
-| `es-es-laura-apollo` | Container image with the `es-ES` locale and `es-ES-Laura-Apollo` voice. |
-| `es-es-pablo-apollo` | Container image with the `es-ES` locale and `es-ES-Pablo-Apollo` voice. |
-| `es-mx-hildarus` | Container image with the `es-MX` locale and `es-MX-HildaRUS` voice. |
-| `es-mx-raul-apollo` | Container image with the `es-MX` locale and `es-MX-Raul-Apollo` voice. |
-| `fi-fi-heidirus` | Container image with the `fi-FI` locale and `fi-FI-HeidiRUS` voice. |
-| `fr-ca-caroline` | Container image with the `fr-CA` locale and `fr-CA-Caroline` voice. |
-| `fr-ca-harmonierus` | Container image with the `fr-CA` locale and `fr-CA-HarmonieRUS` voice. |
-| `fr-ch-guillaume` | Container image with the `fr-CH` locale and `fr-CH-Guillaume` voice. |
-| `fr-fr-hortenserus` | Container image with the `fr-FR` locale and `fr-FR-HortenseRUS` voice. |
-| `fr-fr-julie-apollo` | Container image with the `fr-FR` locale and `fr-FR-Julie-Apollo` voice. |
-| `fr-fr-paul-apollo` | Container image with the `fr-FR` locale and `fr-FR-Paul-Apollo` voice. |
-| `he-il-asaf` | Container image with the `he-IL` locale and `he-IL-Asaf` voice. |
-| `hi-in-hemant` | Container image with the `hi-IN` locale and `hi-IN-Hemant` voice. |
-| `hi-in-kalpana-apollo` | Container image with the `hi-IN` locale and `hi-IN-Kalpana-Apollo` voice. |
-| `hi-in-kalpana` | Container image with the `hi-IN` locale and `hi-IN-Kalpana` voice. |
-| `hr-hr-matej` | Container image with the `hr-HR` locale and `hr-HR-Matej` voice. |
-| `hu-hu-szabolcs` | Container image with the `hu-HU` locale and `hu-HU-Szabolcs` voice. |
-| `id-id-andika` | Container image with the `id-ID` locale and `id-ID-Andika` voice. |
-| `it-it-cosimo-apollo` | Container image with the `it-IT` locale and `it-IT-Cosimo-Apollo` voice. |
-| `it-it-luciarus` | Container image with the `it-IT` locale and `it-IT-LuciaRUS` voice. |
-| `ja-jp-ayumi-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ayumi-Apollo` voice. |
-| `ja-jp-harukarus` | Container image with the `ja-JP` locale and `ja-JP-HarukaRUS` voice. |
-| `ja-jp-ichiro-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ichiro-Apollo` voice. |
-| `ko-kr-heamirus` | Container image with the `ko-KR` locale and `ko-KR-HeamiRUS` voice. |
-| `ms-my-rizwan` | Container image with the `ms-MY` locale and `ms-MY-Rizwan` voice. |
-| `nb-no-huldarus` | Container image with the `nb-NO` locale and `nb-NO-HuldaRUS` voice. |
-| `nl-nl-hannarus` | Container image with the `nl-NL` locale and `nl-NL-HannaRUS` voice. |
-| `pl-pl-paulinarus` | Container image with the `pl-PL` locale and `pl-PL-PaulinaRUS` voice. |
-| `pt-br-daniel-apollo` | Container image with the `pt-BR` locale and `pt-BR-Daniel-Apollo` voice. |
-| `pt-br-heloisarus` | Container image with the `pt-BR` locale and `pt-BR-HeloisaRUS` voice. |
-| `pt-pt-heliarus` | Container image with the `pt-PT` locale and `pt-PT-HeliaRUS` voice. |
-| `ro-ro-andrei` | Container image with the `ro-RO` locale and `ro-RO-Andrei` voice. |
-| `ru-ru-ekaterinarus` | Container image with the `ru-RU` locale and `ru-RU-EkaterinaRUS` voice. |
-| `ru-ru-irina-apollo` | Container image with the `ru-RU` locale and `ru-RU-Irina-Apollo` voice. |
-| `ru-ru-pavel-apollo` | Container image with the `ru-RU` locale and `ru-RU-Pavel-Apollo` voice. |
-| `sk-sk-filip` | Container image with the `sk-SK` locale and `sk-SK-Filip` voice. |
-| `sl-si-lado` | Container image with the `sl-SI` locale and `sl-SI-Lado` voice. |
-| `sv-se-hedvigrus` | Container image with the `sv-SE` locale and `sv-SE-HedvigRUS` voice. |
-| `ta-in-valluvar` | Container image with the `ta-IN` locale and `ta-IN-Valluvar` voice. |
-| `te-in-chitra` | Container image with the `te-IN` locale and `te-IN-Chitra` voice. |
-| `th-th-pattara` | Container image with the `th-TH` locale and `th-TH-Pattara` voice. |
-| `tr-tr-sedarus` | Container image with the `tr-TR` locale and `tr-TR-SedaRUS` voice. |
-| `vi-vn-an` | Container image with the `vi-VN` locale and `vi-VN-An` voice. |
-| `zh-cn-huihuirus` | Container image with the `zh-CN` locale and `zh-CN-HuihuiRUS` voice. |
-| `zh-cn-kangkang-apollo` | Container image with the `zh-CN` locale and `zh-CN-Kangkang-Apollo` voice. |
-| `zh-cn-yaoyao-apollo` | Container image with the `zh-CN` locale and `zh-CN-Yaoyao-Apollo` voice. |
-| `zh-hk-danny-apollo` | Container image with the `zh-HK` locale and `zh-HK-Danny-Apollo` voice. |
-| `zh-hk-tracy-apollo` | Container image with the `zh-HK` locale and `zh-HK-Tracy-Apollo` voice. |
-| `zh-hk-tracyrus` | Container image with the `zh-HK` locale and `zh-HK-TracyRUS` voice. |
-| `zh-tw-hanhanrus` | Container image with the `zh-TW` locale and `zh-TW-HanHanRUS` voice. |
-| `zh-tw-yating-apollo` | Container image with the `zh-TW` locale and `zh-TW-Yating-Apollo` voice. |
-| `zh-tw-zhiwei-apollo` | Container image with the `zh-TW` locale and `zh-TW-Zhiwei-Apollo` voice. |
-
-| Locales for v1.7.0 | Notes |
-||:|
-| `ar-eg-hoda` | Container image with the `ar-EG` locale and `ar-EG-Hoda` voice. |
-| `ar-sa-naayf` | Container image with the `ar-SA` locale and `ar-SA-Naayf` voice. |
-| `bg-bg-ivan` | Container image with the `bg-BG` locale and `bg-BG-Ivan` voice. |
-| `ca-es-herenarus` | Container image with the `ca-ES` locale and `ca-ES-HerenaRUS` voice. |
-| `cs-cz-jakub` | Container image with the `cs-CZ` locale and `cs-CZ-Jakub` voice. |
-| `da-dk-hellerus` | Container image with the `da-DK` locale and `da-DK-HelleRUS` voice. |
-| `de-at-michael` | Container image with the `de-AT` locale and `de-AT-Michael` voice. |
-| `de-ch-karsten` | Container image with the `de-CH` locale and `de-CH-Karsten` voice. |
-| `de-de-hedda` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. |
-| `de-de-heddarus` | Container image with the `de-DE` locale and `de-DE-Hedda` voice. |
-| `de-de-stefan-apollo` | Container image with the `de-DE` locale and `de-DE-Stefan-Apollo` voice. |
-| `el-gr-stefanos` | Container image with the `el-GR` locale and `el-GR-Stefanos` voice. |
-| `en-au-catherine` | Container image with the `en-AU` locale and `en-AU-Catherine` voice. |
-| `en-au-hayleyrus` | Container image with the `en-AU` locale and `en-AU-HayleyRUS` voice. |
-| `en-ca-heatherrus` | Container image with the `en-CA` locale and `en-CA-HeatherRUS` voice. |
-| `en-ca-linda` | Container image with the `en-CA` locale and `en-CA-Linda` voice. |
-| `en-gb-george-apollo` | Container image with the `en-GB` locale and `en-GB-George-Apollo` voice. |
-| `en-gb-hazelrus` | Container image with the `en-GB` locale and `en-GB-HazelRUS` voice. |
-| `en-gb-susan-apollo` | Container image with the `en-GB` locale and `en-GB-Susan-Apollo` voice. |
-| `en-ie-sean` | Container image with the `en-IE` locale and `en-IE-Sean` voice. |
-| `en-in-heera-apollo` | Container image with the `en-IN` locale and `en-IN-Heera-Apollo` voice. |
-| `en-in-priyarus` | Container image with the `en-IN` locale and `en-IN-PriyaRUS` voice. |
-| `en-in-ravi-apollo` | Container image with the `en-IN` locale and `en-IN-Ravi-Apollo` voice. |
-| `en-us-benjaminrus` | Container image with the `en-US` locale and `en-US-BenjaminRUS` voice. |
-| `en-us-guy24krus` | Container image with the `en-US` locale and `en-US-Guy24kRUS` voice. |
-| `en-us-aria24krus` | Container image with the `en-US` locale and `en-US-Aria24kRUS` voice. |
-| `en-us-ariarus` | Container image with the `en-US` locale and `en-US-AriaRUS` voice. |
-| `en-us-zirarus` | Container image with the `en-US` locale and `en-US-ZiraRUS` voice. |
-| `es-es-helenarus` | Container image with the `es-ES` locale and `es-ES-HelenaRUS` voice. |
-| `es-es-laura-apollo` | Container image with the `es-ES` locale and `es-ES-Laura-Apollo` voice. |
-| `es-es-pablo-apollo` | Container image with the `es-ES` locale and `es-ES-Pablo-Apollo` voice. |
-| `es-mx-hildarus` | Container image with the `es-MX` locale and `es-MX-HildaRUS` voice. |
-| `es-mx-raul-apollo` | Container image with the `es-MX` locale and `es-MX-Raul-Apollo` voice. |
-| `fi-fi-heidirus` | Container image with the `fi-FI` locale and `fi-FI-HeidiRUS` voice. |
-| `fr-ca-caroline` | Container image with the `fr-CA` locale and `fr-CA-Caroline` voice. |
-| `fr-ca-harmonierus` | Container image with the `fr-CA` locale and `fr-CA-HarmonieRUS` voice. |
-| `fr-ch-guillaume` | Container image with the `fr-CH` locale and `fr-CH-Guillaume` voice. |
-| `fr-fr-hortenserus` | Container image with the `fr-FR` locale and `fr-FR-HortenseRUS` voice. |
-| `fr-fr-julie-apollo` | Container image with the `fr-FR` locale and `fr-FR-Julie-Apollo` voice. |
-| `fr-fr-paul-apollo` | Container image with the `fr-FR` locale and `fr-FR-Paul-Apollo` voice. |
-| `he-il-asaf` | Container image with the `he-IL` locale and `he-IL-Asaf` voice. |
-| `hi-in-hemant` | Container image with the `hi-IN` locale and `hi-IN-Hemant` voice. |
-| `hi-in-kalpana-apollo` | Container image with the `hi-IN` locale and `hi-IN-Kalpana-Apollo` voice. |
-| `hi-in-kalpana` | Container image with the `hi-IN` locale and `hi-IN-Kalpana` voice. |
-| `hr-hr-matej` | Container image with the `hr-HR` locale and `hr-HR-Matej` voice. |
-| `hu-hu-szabolcs` | Container image with the `hu-HU` locale and `hu-HU-Szabolcs` voice. |
-| `id-id-andika` | Container image with the `id-ID` locale and `id-ID-Andika` voice. |
-| `it-it-cosimo-apollo` | Container image with the `it-IT` locale and `it-IT-Cosimo-Apollo` voice. |
-| `it-it-luciarus` | Container image with the `it-IT` locale and `it-IT-LuciaRUS` voice. |
-| `ja-jp-ayumi-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ayumi-Apollo` voice. |
-| `ja-jp-harukarus` | Container image with the `ja-JP` locale and `ja-JP-HarukaRUS` voice. |
-| `ja-jp-ichiro-apollo` | Container image with the `ja-JP` locale and `ja-JP-Ichiro-Apollo` voice. |
-| `ko-kr-heamirus` | Container image with the `ko-KR` locale and `ko-KR-HeamiRUS` voice. |
-| `ms-my-rizwan` | Container image with the `ms-MY` locale and `ms-MY-Rizwan` voice. |
-| `nb-no-huldarus` | Container image with the `nb-NO` locale and `nb-NO-HuldaRUS` voice. |
-| `nl-nl-hannarus` | Container image with the `nl-NL` locale and `nl-NL-HannaRUS` voice. |
-| `pl-pl-paulinarus` | Container image with the `pl-PL` locale and `pl-PL-PaulinaRUS` voice. |
-| `pt-br-daniel-apollo` | Container image with the `pt-BR` locale and `pt-BR-Daniel-Apollo` voice. |
-| `pt-br-heloisarus` | Container image with the `pt-BR` locale and `pt-BR-HeloisaRUS` voice. |
-| `pt-pt-heliarus` | Container image with the `pt-PT` locale and `pt-PT-HeliaRUS` voice. |
-| `ro-ro-andrei` | Container image with the `ro-RO` locale and `ro-RO-Andrei` voice. |
-| `ru-ru-ekaterinarus` | Container image with the `ru-RU` locale and `ru-RU-EkaterinaRUS` voice. |
-| `ru-ru-irina-apollo` | Container image with the `ru-RU` locale and `ru-RU-Irina-Apollo` voice. |
-| `ru-ru-pavel-apollo` | Container image with the `ru-RU` locale and `ru-RU-Pavel-Apollo` voice. |
-| `sk-sk-filip` | Container image with the `sk-SK` locale and `sk-SK-Filip` voice. |
-| `sl-si-lado` | Container image with the `sl-SI` locale and `sl-SI-Lado` voice. |
-| `sv-se-hedvigrus` | Container image with the `sv-SE` locale and `sv-SE-HedvigRUS` voice. |
-| `ta-in-valluvar` | Container image with the `ta-IN` locale and `ta-IN-Valluvar` voice. |
-| `te-in-chitra` | Container image with the `te-IN` locale and `te-IN-Chitra` voice. |
-| `th-th-pattara` | Container image with the `th-TH` locale and `th-TH-Pattara` voice. |
-| `tr-tr-sedarus` | Container image with the `tr-TR` locale and `tr-TR-SedaRUS` voice. |
-| `vi-vn-an` | Container image with the `vi-VN` locale and `vi-VN-An` voice. |
-| `zh-cn-huihuirus` | Container image with the `zh-CN` locale and `zh-CN-HuihuiRUS` voice. |
-| `zh-cn-kangkang-apollo` | Container image with the `zh-CN` locale and `zh-CN-Kangkang-Apollo` voice. |
-| `zh-cn-yaoyao-apollo` | Container image with the `zh-CN` locale and `zh-CN-Yaoyao-Apollo` voice. |
-| `zh-hk-danny-apollo` | Container image with the `zh-HK` locale and `zh-HK-Danny-Apollo` voice. |
-| `zh-hk-tracy-apollo` | Container image with the `zh-HK` locale and `zh-HK-Tracy-Apollo` voice. |
-| `zh-hk-tracyrus` | Container image with the `zh-HK` locale and `zh-HK-TracyRUS` voice. |
-| `zh-tw-hanhanrus` | Container image with the `zh-TW` locale and `zh-TW-HanHanRUS` voice. |
-| `zh-tw-yating-apollo` | Container image with the `zh-TW` locale and `zh-TW-Yating-Apollo` voice. |
-| `zh-tw-zhiwei-apollo` | Container image with the `zh-TW` locale and `zh-TW-Zhiwei-Apollo` voice. |
---

## Neural Text-to-speech

The [Neural Text-to-speech][sp-ntts] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/speechservices/` repository and is named `neural-text-to-speech`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech`.
Release notes for `3.0.015490002-onprem-amd64`:
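As a hedged sketch, the fully qualified image name and the release tag above compose into the `docker` commands you would run. The snippet below (Python, per the doc's `ms.devlang`) only builds the command strings; the billing endpoint and API key are placeholders, not real values:

```python
# Hypothetical sketch: compose the docker commands for this container.
# The tag comes from the release notes above; <your-endpoint-uri> and
# <your-api-key> are placeholders you must replace.
image = "mcr.microsoft.com/azure-cognitive-services/speechservices/neural-text-to-speech"
tag = "3.0.015490002-onprem-amd64"

pull_cmd = ["docker", "pull", f"{image}:{tag}"]

# Cognitive Services containers are started with Eula/Billing/ApiKey arguments.
run_cmd = ["docker", "run", "--rm", "-it", "-p", "5000:5000",
           f"{image}:{tag}",
           "Eula=accept",
           "Billing=<your-endpoint-uri>",
           "ApiKey=<your-api-key>"]

print(" ".join(pull_cmd))
```

Verify the current tag on `mcr.microsoft.com` before pulling; pinned tags change between releases.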
## Translator
-The [Translator][tr-containers] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation`.
+The [Translator][tr-containers] container image can be found on the `mcr.microsoft.com` container registry syndicate. It resides within the `azure-cognitive-services/translator` repository and is named `text-translation`. The fully qualified container image name is `mcr.microsoft.com/azure-cognitive-services/translator/text-translation:1.0.018950002-amd64-preview`.
This container image has the following tags available.

| Image Tags | Notes |
|-|:-|
-| `latest` | |
+| `1.0.018950002-amd64-preview` | |
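Because the diff above replaces the floating `latest` tag with a pinned version tag, a minimal hedged sketch of pulling by that pinned tag (string composition only; run the printed command yourself):

```python
# Hypothetical sketch: pin the Translator image to the preview tag from the
# table above instead of relying on `latest`.
image = "mcr.microsoft.com/azure-cognitive-services/translator/text-translation"
tag = "1.0.018950002-amd64-preview"
print(f"docker pull {image}:{tag}")
```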
[ad-containers]: ../anomaly-Detector/anomaly-detector-container-howto.md
cognitive-services Deploy Query Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/conversational-language-understanding/how-to/deploy-query-model.md
Previously updated : 01/26/2022 Last updated : 03/15/2022 ms.devlang: csharp, python
After you have [trained a model](./train-model.md) on your dataset, you're ready to deploy it. After deploying your model, you'll be able to query it for predictions.
+> [!TIP]
+> Before deploying a model, review its model details to confirm that it's performing as expected.
+ ## Deploy model
-Deploying a model is to host it and make it available for predictions through an endpoint. You can only have 1 deployed model per project, deploying another one will overwrite the previously deployed model.
+Deploying a model hosts it and makes it available for predictions through an endpoint.
When a model is deployed, you can test it directly in the portal or by calling its associated API.
-Simply select a model and click on deploy model in the Deploy model page.
+### Conversation projects deployments
+
+1. Select **Add deployment** to submit a new deployment job.
+
+ :::image type="content" source="../media/add-deployment-model.png" alt-text="A screenshot showing the model deployment button in Language Studio." lightbox="../media/add-deployment-model.png":::
+
+2. In the window that appears, you can create a new deployment name or override an existing one. Then, you can assign a trained model to this deployment name.
+
+ :::image type="content" source="../media/create-deployment-job.png" alt-text="A screenshot showing the add deployment job screen in Language Studio." lightbox="../media/create-deployment-job.png":::
++
+#### Swap deployments
+To swap the models between two deployments, select the two deployment names and then select **Swap deployments**. In the window that appears, select the deployment name you want to swap with.
++
+#### Delete deployment
+
+To delete a deployment, select it and then select **Delete deployment**.
> [!TIP] > If you're using the REST API, see the [quickstart](../quickstart.md?pivots=rest-api#deploy-your-model) and REST API [reference documentation](https://westus2.dev.cognitive.microsoft.com/docs/services/language-authoring-clu-apis-2021-11-01-preview/operations/Deployments_TriggerDeploymentJob) for examples and more information.
-**Orchestration workflow projects deployments**
+> [!NOTE]
+> You can only have ten deployment names.
+
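As the tip above notes, deployments can also be triggered through the REST API. A minimal sketch of composing such a request URL in Python, assuming hypothetical placeholder values for the resource endpoint, project, and deployment name (the exact route and `api-version` should be taken from the linked `Deployments_TriggerDeploymentJob` reference):

```python
# Illustrative only: placeholder endpoint, project, and deployment name.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
PROJECT = "my-project"      # hypothetical project name
DEPLOYMENT = "staging"      # hypothetical deployment name

def deployment_url(endpoint: str, project: str, deployment: str, api_version: str) -> str:
    """Compose the deployment request URL from its parts."""
    return (f"{endpoint}/language/authoring/analyze-conversations/"
            f"projects/{project}/deployments/{deployment}"
            f"?api-version={api_version}")

url = deployment_url(ENDPOINT, PROJECT, DEPLOYMENT, "2021-11-01-preview")
```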
+### Orchestration workflow projects deployments
+
+1. Click on **Add deployment** to submit a new deployment job.
+
+    As with conversation projects, in the window that appears you can create a new deployment by giving it a name, or overwrite an existing deployment name. Then, add a trained model to this deployment name and select **Next**.
-When you're deploying an orchestration workflow project, A small window will show up for you to confirm your deployment, and configure parameters for connected services.
+ :::image type="content" source="../media/create-deployment-job-orch.png" alt-text="A screenshot showing deployment job creation in Language Studio." lightbox="../media/create-deployment-job-orch.png":::
-If you're connecting one or more LUIS applications, specify the deployment name, and whether you're using *slot* or *version* type deployment.
-* The *slot* deployment type requires you to pick between a production and staging slot.
-* The *version* deployment type requires you to specify the version you have published.
+2. If you're connecting one or more LUIS applications or conversational language understanding projects, specify the deployment name.
-No configurations are required for custom question answering and conversational language understanding connections, or unlinked intents.
+ No configurations are required for custom question answering or unlinked intents.
-LUIS projects **must be published** to the slot configured during the Orchestration deployment, and custom question answering KBs must also be published to their Production slots.
+ LUIS projects **must be published** to the slot configured during the Orchestration deployment, and custom question answering KBs must also be published to their Production slots.
+ :::image type="content" source="../media/deploy-connected-services.png" alt-text="A screenshot showing the deployment screen for orchestration workflow projects." lightbox="../media/deploy-connected-services.png":::
## Send a Conversational Language Understanding request
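A deployed model is queried by posting a request that names the project and deployment. The shape below is an illustrative sketch only: field names such as `projectName` and `deploymentName` follow the pattern used by the runtime API at the time of writing, and the query text and names are hypothetical; verify the exact schema against the REST reference:

```python
import json

# Hypothetical values; replace with your own project and deployment names.
body = {
    "kind": "Conversation",
    "analysisInput": {
        "conversationItem": {"id": "1", "participantId": "user", "text": "Book a flight to Cairo"}
    },
    "parameters": {"projectName": "my-project", "deploymentName": "staging"},
}

# Serialize the request body for the POST call.
payload = json.dumps(body)
```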
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-classification/how-to/call-api.md
Previously updated : 01/07/2022 Last updated : 03/15/2022
See the [application development lifecycle](../overview.md#project-development-l
## Deploy your model
-After your model is [trained](train-model.md), you can deploy it. Deploying your model lets you start using it to classify text. You can deploy your model using the [REST API](https://westus2.dev.cognitive.microsoft.com/docs/services/language-authoring-apis-2021-11-01-preview/operations/Deployments_TriggerDeploymentJob) or Language Studio. To use Language Studio, see the steps below:
+Deploying a model hosts it and makes it available for predictions through an endpoint.
+When a model is deployed, you will be able to test the model directly in the portal or by calling the API associated with it.
+
+> [!NOTE]
+> You can only have ten deployment names.
[!INCLUDE [Deploy a model using Language Studio](../includes/deploy-model-language-studio.md)]
+
+### Delete deployment
-If you deploy your model through the Language Studio, your `deployment-name` is `prod`.
+To delete a deployment, select the deployment you want to delete and click **Delete deployment**.
> [!TIP] > You can [test your model in Language Studio](../quickstart.md?pivots=language-studio#test-your-model) by sending samples of text for it to classify.
If you deploy your model through the Language Studio, your `deployment-name` is
5. In the response header you receive, extract `jobId` from `operation-location`, which has the format: `{YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze/jobs/{jobId}`
-6. Copy the retrieve request and replace `<OPERATION-ID>` with `jobId` received form last step and submit the request.
+6. Copy the retrieve request and replace `<OPERATION-ID>` with `jobId` received from the last step and submit the request.
:::image type="content" source="../media/get-prediction-url-3.png" alt-text="run-inference-3" lightbox="../media/get-prediction-url-3.png":::
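The `jobId` in step 5 is simply the last path segment of the `operation-location` header, so it can be pulled out programmatically. A small illustrative sketch (the header value below is a made-up example following the documented format):

```python
# Made-up operation-location value following the documented format:
# {YOUR-ENDPOINT}/text/analytics/v3.2-preview.2/analyze/jobs/{jobId}
operation_location = ("https://example.cognitiveservices.azure.com/"
                      "text/analytics/v3.2-preview.2/analyze/jobs/"
                      "123e4567-e89b-12d3-a456-426614174000")

def extract_job_id(header_value: str) -> str:
    """Return the last path segment of the header, which is the job ID."""
    return header_value.rstrip("/").rsplit("/", 1)[-1]

job_id = extract_job_id(operation_location)
```

The `jobId` then replaces `<OPERATION-ID>` in the retrieve request from step 6.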
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/custom-named-entity-recognition/how-to/call-api.md
See the [application development lifecycle](../overview.md#application-developme
## Deploy your model
-Go to your project in [Language studio](https://aka.ms/custom-extraction).
+Deploying a model hosts it and makes it available for predictions through an endpoint.
+
+When a model is deployed, you will be able to test the model directly in the portal or by calling the API associated with it.
+
+> [!NOTE]
+> You can only have ten deployment names.
[!INCLUDE [Deploy a model using Language Studio](../includes/deploy-model-language-studio.md)]
+
+### Delete deployment
-If you deploy your model through the Language Studio, your `deployment-name` is `prod`.
+To delete a deployment, select the deployment you want to delete and select **Delete deployment**.
> [!TIP] > You can test your model in Language Studio by sending samples of text for it to classify.
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-summarization/how-to/call-api.md
Previously updated : 03/01/2022 Last updated : 03/16/2022
Extractive summarization returns a rank score as a part of the system response a
There is another feature in Azure Cognitive Service for Language, [key phrases extraction](./../../key-phrase-extraction/how-to/call-api.md), that can extract key information. When deciding between key phrase extraction and extractive summarization, consider the following: * key phrase extraction returns phrases while extractive summarization returns sentences
-* extractive summarization returns sentences together with a rank score. Top ranked sentences will be returned per request
+* extractive summarization returns sentences together with a rank score, and top ranked sentences will be returned per request
+* extractive summarization also returns the following positional information:
+  * offset: the start position of each extracted sentence
+  * length: the length of each extracted sentence
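The offset and length values locate each extracted sentence inside the original document, so the sentence can be recovered by slicing. An illustrative sketch, assuming offsets are character positions into the submitted text (the document and response fragment are made up):

```python
document = "The weather was fine. We went hiking. It rained later."

# Hypothetical response fragment: one extracted sentence with its
# character offset and length into the document above.
extracted = [
    {"text": "We went hiking.", "offset": 22, "length": 15},
]

for sentence in extracted:
    start = sentence["offset"]
    end = start + sentence["length"]
    # Slicing the document at [offset, offset + length) reproduces the sentence.
    assert document[start:end] == sentence["text"]
```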
+ ## Determine how to process the data (optional)
Using the above example, the API might return the following summarized sentences
*"At the intersection of all three, thereΓÇÖs magicΓÇöwhat we call XYZ-code as illustrated in Figure 1ΓÇöa joint representation to create more powerful AI that can speak, hear, see, and understand humans better."* - ## Service and data limits [!INCLUDE [service limits article](../../includes/service-limits-link.md)]
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-summarization/overview.md
Previously updated : 03/01/2022 Last updated : 03/16/2022
Text summarization supports the following features:
* **Extracted sentences**: These sentences collectively convey the main idea of the document. They're original sentences extracted from the input document's content. * **Rank score**: The rank score indicates how relevant a sentence is to a document's main topic. Text summarization ranks extracted sentences, and you can determine whether they're returned in the order they appear, or according to their rank. * **Maximum sentences**: Determine the maximum number of sentences to be returned. For example, if you request a three-sentence summary, Text summarization will return the three highest scored sentences.
+* **Positional information**: The start position and length of extracted sentences.
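The interplay of rank score, maximum sentences, and ordering can be sketched as follows; this is a toy illustration of the selection logic, not the service's actual scoring:

```python
# Toy extracted sentences: (position in document, rank score, text).
sentences = [
    (0, 0.40, "Intro sentence."),
    (1, 0.95, "Key finding one."),
    (2, 0.10, "Aside."),
    (3, 0.80, "Key finding two."),
]

def summarize(sents, max_sentences, by_rank=False):
    """Keep the top-scored sentences, returned by rank or in document order."""
    top = sorted(sents, key=lambda s: s[1], reverse=True)[:max_sentences]
    if not by_rank:
        top = sorted(top, key=lambda s: s[0])  # restore document order
    return [s[2] for s in top]

summary = summarize(sentences, max_sentences=2)
```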
## Get started with text summarization
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/overview.md
Azure Communication Services has many samples available, which you can use to te
| Sample Name | Description | Languages/Platforms Available | | : | : | : | | [Calling Hero Sample](./calling-hero-sample.md) | Provides a sample of creating a calling application. | [Web](https://github.com/Azure-Samples/communication-services-web-calling-hero), [iOS](https://github.com/Azure-Samples/communication-services-ios-calling-hero), [Android](https://github.com/Azure-Samples/communication-services-android-calling-hero) |
-| [Chat Hero Sample](./chat-hero-sample.md) | Provides a sample of creating a chat application. | [Web](https://github.com/Azure-Samples/communication-services-web-chat-hero) | |
+| [Chat Hero Sample](./chat-hero-sample.md) | Provides a sample of creating a chat application. | [Web](https://github.com/Azure-Samples/communication-services-web-chat-hero) |
| [Trusted Authentication Server Sample](./trusted-auth-sample.md) | Provides a sample implementation of a trusted authentication service used to generate user and access tokens for Azure Communication Services. The service by default maps generated identities to Azure Active Directory | [node.JS](https://github.com/Azure-Samples/communication-services-authentication-hero-nodejs), [C#](https://github.com/Azure-Samples/communication-services-authentication-hero-csharp) | [Web Calling Sample](./web-calling-sample.md) | A step by step walk-through of ACS Calling features, including PSTN, within the Web. | [Web](https://github.com/Azure-Samples/communication-services-web-calling-tutorial/) | | [Network Traversal Sample]( https://github.com/Azure-Samples/communication-services-network-traversal-hero) | Sample app demonstrating network traversal functionality | Node.js
cosmos-db Choose Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/choose-api.md
Last updated 12/08/2021+ # Choose an API in Azure Cosmos DB
cosmos-db Consistency Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/consistency-levels.md
Last updated 02/17/2022+ # Consistency levels in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
cosmos-db Continuous Backup Restore Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/continuous-backup-restore-introduction.md
Last updated 11/22/2021 ---++ # Continuous backup with point-in-time restore in Azure Cosmos DB
cosmos-db Gremlin Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/graph/gremlin-headers.md
using (GremlinClient client = new GremlinClient(server, new GraphSON2Reader(), n
} ```
-An example that demonstrates how to read status attribute from Gremlin java client:
+An example that demonstrates how to read the status attribute from the Gremlin Java client:
```java try {
cosmos-db How Pricing Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-pricing-works.md
Last updated 12/07/2021+ # Pricing model in Azure Cosmos DB
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/introduction.md
Last updated 08/26/2021-+ # Welcome to Azure Cosmos DB
cosmos-db Mongodb Indexing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/mongodb-indexing.md
Last updated 10/13/2021 -+ # Manage indexing in Azure Cosmos DB API for MongoDB [!INCLUDE[appliesto-mongodb-api](../includes/appliesto-mongodb-api.md)]
cosmos-db Partitioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/partitioning-overview.md
Last updated 02/08/2022-+ # Partitioning and horizontal scaling in Azure Cosmos DB
cosmos-db Request Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/request-units.md
Last updated 08/26/2021--+ # Request Units in Azure Cosmos DB [!INCLUDE[appliesto-all-apis](includes/appliesto-all-apis.md)]
cosmos-db Change Feed Design Patterns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/change-feed-design-patterns.md
Last updated 08/26/2021+ # Change feed design patterns in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
cosmos-db Modeling Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/modeling-data.md
Last updated 02/15/2022-+ # Data modeling in Azure Cosmos DB [!INCLUDE[appliesto-sql-api](../includes/appliesto-sql-api.md)]
cosmos-db Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/use-cases.md
Last updated 05/21/2019+ # Common Azure Cosmos DB use cases
data-factory Store Credentials In Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/store-credentials-in-key-vault.md
Last updated 01/21/2022
-# Store credential in Azure Key Vault
+# Store credentials in Azure Key Vault
[!INCLUDE[appliesto-adf-asa-md](includes/appliesto-adf-asa-md.md)]
data-factory Transform Data Using Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-using-spark.md
The following table describes the JSON properties used in the JSON definition:
| getDebugInfo | Specifies when the Spark log files are copied to the Azure storage used by HDInsight cluster (or) specified by sparkJobLinkedService. Allowed values: None, Always, or Failure. Default value: None. | No | ## Folder structure
-Spark jobs are more extensible than Pig/Hive jobs. For Spark jobs, you can provide multiple dependencies such as jar packages (placed in the java CLASSPATH), Python files (placed on the PYTHONPATH), and any other files.
+Spark jobs are more extensible than Pig/Hive jobs. For Spark jobs, you can provide multiple dependencies such as jar packages (placed in the Java CLASSPATH), Python files (placed on the PYTHONPATH), and any other files.
Create the following folder structure in the Azure Blob storage referenced by the HDInsight linked service. Then, upload dependent files to the appropriate sub folders in the root folder represented by **entryFilePath**. For example, upload Python files to the pyFiles subfolder and jar files to the jars subfolder of the root folder. At runtime, the service expects the following folder structure in the Azure Blob storage:
Create the following folder structure in the Azure Blob storage referenced by th
| | - | -- | | | `.` (root) | The root path of the Spark job in the storage linked service | Yes | Folder | | &lt;user defined &gt; | The path pointing to the entry file of the Spark job | Yes | File |
-| ./jars | All files under this folder are uploaded and placed on the java classpath of the cluster | No | Folder |
+| ./jars | All files under this folder are uploaded and placed on the Java classpath of the cluster | No | Folder |
| ./pyFiles | All files under this folder are uploaded and placed on the PYTHONPATH of the cluster | No | Folder | | ./files | All files under this folder are uploaded and placed on executor working directory | No | Folder | | ./archives | All files under this folder are uncompressed | No | Folder |
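Putting the table together, the expected layout under the root folder might look like this (the root path and entry file name are placeholders):

```
SparkJob/                  <- user-defined root path in the linked storage
    main.py                <- entry file (entryFilePath); placeholder name
    jars/                  <- uploaded and placed on the cluster's Java CLASSPATH
    pyFiles/               <- uploaded and placed on the cluster's PYTHONPATH
    files/                 <- placed in the executor working directory
    archives/              <- uncompressed on the cluster
```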
data-factory Data Factory Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/v1/data-factory-spark.md
The following table describes the JSON properties used in the JSON definition.
| sparkJobLinkedService | The Storage linked service that holds the Spark job file, dependencies, and logs. If you don't specify a value for this property, the storage associated with the HDInsight cluster is used. | No | ## Folder structure
-The Spark activity doesn't support an inline script as Pig and Hive activities do. Spark jobs are also more extensible than Pig/Hive jobs. For Spark jobs, you can provide multiple dependencies such as jar packages (placed in the java CLASSPATH), Python files (placed on the PYTHONPATH), and any other files.
+The Spark activity doesn't support an inline script as Pig and Hive activities do. Spark jobs are also more extensible than Pig/Hive jobs. For Spark jobs, you can provide multiple dependencies such as jar packages (placed in the Java CLASSPATH), Python files (placed on the PYTHONPATH), and any other files.
Create the following folder structure in the blob storage referenced by the HDInsight linked service. Then, upload dependent files to the appropriate subfolders in the root folder represented by **entryFilePath**. For example, upload Python files to the pyFiles subfolder and jar files to the jars subfolder of the root folder. At runtime, the Data Factory service expects the following folder structure in the blob storage:
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly.
<tr><td>New Stringify data transformation in mapping data flows</td><td>Mapping data flows adds a new data transformation called Stringify to make it easy to convert complex data types like structs and arrays into string form that can be sent to structured output destinations.<br><a href="data-flow-stringify.md">Learn more</a></td></tr> <tr>
- <td rowspan=2><b>Integration Runtime</b></td>
- <td>Azure Data Factory Managed vNet goes GA</td>
- <td>You can now provision the Azure Integration Runtime as part of a managed Virtual Network and leverage Private Endpoints to securely connect to supported data stores. Data traffic goes through Azure Private Links which provide secured connectivity to the data source. In addition, it prevents data exfiltration to the public internet.<br><a href="managed-virtual-network-private-endpoint.md">Learn more</a></td>
- </tr>
- <tr>
- <td>Express VNet injection for SSIS integration runtime (Public Preview)</td>
- <td>The SSIS integration runtime now supports express VNet injection.<br>
- Learn more:<br>
- <a href="join-azure-ssis-integration-runtime-virtual-network.md">Overview of VNet injection for SSIS integration runtime</a><br>
- <a href="azure-ssis-integration-runtime-virtual-network-configuration.md">Standard vs. express VNet injection for SSIS integration runtime</a><br>
- <a href="azure-ssis-integration-runtime-express-virtual-network-injection.md">Express VNet injection for SSIS integration runtime</a>
- </td>
- </tr>
+ <td><b>Integration Runtime</b></td>
+ <td>Express VNet injection for SSIS integration runtime (Public Preview)</td>
+ <td>The SSIS integration runtime now supports express VNet injection.<br>
+ Learn more:<br>
+ <a href="join-azure-ssis-integration-runtime-virtual-network.md">Overview of VNet injection for SSIS integration runtime</a><br>
+ <a href="azure-ssis-integration-runtime-virtual-network-configuration.md">Standard vs. express VNet injection for SSIS integration runtime</a><br>
+ <a href="azure-ssis-integration-runtime-express-virtual-network-injection.md">Express VNet injection for SSIS integration runtime</a>
+ </td>
+</tr>
<tr><td rowspan=2><b>Security</b></td><td>Azure Key Vault integration improvement</td><td>We have improved Azure Key Vault integration by adding user selectable drop-downs to select the secret values in the linked service, increasing productivity and not requiring users to type in the secrets, which could result in human error.</td></tr> <tr><td>Support for user-assigned managed identity in Azure Data Factory</td><td>Credential safety is crucial for any enterprise. With that in mind, the Azure Data Factory (ADF) team is committed to making the data engineering process secure yet simple for data engineers. We are excited to announce the support for user-assigned managed identity (Preview) in all connectors/ linked services that support Azure Active Directory (Azure AD) based authentication.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory/support-for-user-assigned-managed-identity-in-azure-data-factory/ba-p/2841013">Learn more</a></td></tr>
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
Title: Adaptive network hardening in Microsoft Defender for Cloud | Microsoft Docs description: Learn how to use actual traffic patterns to harden your network security groups (NSG) rules and further improve your security posture. ++ Last updated 11/09/2021 # Improve your network security posture with adaptive network hardening
defender-for-cloud Alert Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alert-validation.md
Title: Alert validation in Microsoft Defender for Cloud | Microsoft Docs description: Learn how to validate that your security alerts are correctly configured in Microsoft Defender for Cloud ++ Last updated 12/12/2021
defender-for-cloud Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-overview.md
Title: Security alerts and incidents in Microsoft Defender for Cloud description: Learn how Microsoft Defender for Cloud generates security alerts and correlates them into incidents. ++ Last updated 11/09/2021 # Security alerts and incidents in Microsoft Defender for Cloud
defender-for-cloud Alerts Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-reference.md
Title: Reference table for all security alerts in Microsoft Defender for Cloud description: This article lists the security alerts visible in Microsoft Defender for Cloud ++ Last updated 03/10/2022 # Security alerts - a reference guide
defender-for-cloud Alerts Schemas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-schemas.md
Title: Schemas for the Microsoft Defender for Cloud alerts description: This article describes the different schemas used by Microsoft Defender for Cloud for security alerts. ++ Last updated 11/09/2021
defender-for-cloud Alerts Suppression Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/alerts-suppression-rules.md
Title: Using alerts suppression rules to suppress false positives or other unwan
description: This article explains how to use Microsoft Defender for Cloud's suppression rules to hide unwanted security alerts Last updated 11/09/2021 ++ # Suppress alerts from Microsoft Defender for Cloud
defender-for-cloud Apply Security Baseline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/apply-security-baseline.md
Title: Harden your Windows and Linux OS with Azure security baseline and Microsoft Defender for Cloud description: Learn how Microsoft Defender for Cloud uses the guest configuration to compare your OS hardening with the guidance from Azure Security Benchmark ++ Last updated 11/09/2021
defender-for-cloud Asset Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/asset-inventory.md
Title: Microsoft Defender for Cloud's asset inventory
description: Learn about Microsoft Defender for Cloud's asset management experience providing full visibility over all your Defender for Cloud monitored resources. Last updated 11/09/2021 ++ # Use asset inventory to manage your resources' security posture
defender-for-cloud Auto Deploy Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/auto-deploy-vulnerability-assessment.md
Title: Configure Microsoft Defender for Cloud to automatically assess machines for vulnerabilities description: Use Microsoft Defender for Cloud to ensure your machines have a vulnerability assessment solution ++ Last updated 11/09/2021
defender-for-cloud Concept Defender For Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-defender-for-cosmos.md
Title: Overview of Defender for Azure Cosmos DB
description: Learn about the benefits and features of Microsoft Defender for Azure Cosmos DB. ++ Last updated 03/01/2022
defender-for-cloud Configure Email Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/configure-email-notifications.md
Title: Configure email notifications for Microsoft Defender for Cloud alerts description: Learn how to fine-tune the Microsoft Defender for Cloud security alert emails. ++ Last updated 11/09/2021
Use Defender for Cloud's **Email notifications** settings page to define prefere
To avoid alert fatigue, Defender for Cloud limits the volume of outgoing mails. For each subscription, Defender for Cloud sends: -- a maximum of one email per **6 hours** (4 emails per day) for **high-severity** alerts-- a maximum of one email per **12 hours** (2 emails per day) for **medium-severity** alerts-- a maximum of one email per **24 hours** for **low-severity** alerts
+- approximately **four emails per day** for **high-severity** alerts
+- approximately **two emails per day** for **medium-severity** alerts
+- approximately **one email per day** for **low-severity** alerts
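The caps above amount to a per-severity daily budget, which can be pictured as a simple counter. This is an illustrative sketch only, not Defender for Cloud's actual throttling logic:

```python
# Approximate daily caps per alert severity, as listed above.
DAILY_CAP = {"high": 4, "medium": 2, "low": 1}

def should_send(severity: str, sent_today: dict) -> bool:
    """Allow an email only while today's cap for that severity isn't exhausted."""
    if sent_today.get(severity, 0) >= DAILY_CAP[severity]:
        return False
    sent_today[severity] = sent_today.get(severity, 0) + 1
    return True

sent = {}
# Third medium-severity email in one day exceeds the cap of two.
results = [should_send("medium", sent) for _ in range(3)]
```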
:::image type="content" source="./media/configure-email-notifications/email-notification-settings.png" alt-text="Configuring the details of the contact who will receive emails about security alerts." :::
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
Title: Continuous export can send Microsoft Defender for Cloud's alerts and recommendations to Log Analytics workspaces or Azure Event Hubs description: Learn how to configure continuous export of security alerts and recommendations to Log Analytics workspaces or Azure Event Hubs ++ Last updated 12/09/2021
defender-for-cloud Cross Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/cross-tenant-management.md
description: Learn how to set up cross-tenant management to manage the security
documentationcenter: na ms.assetid: 7d51291a-4b00-4e68-b872-0808b60e6d9c ++ na Last updated 11/09/2021
defender-for-cloud Custom Dashboards Azure Workbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-dashboards-azure-workbooks.md
Title: Workbooks gallery in Microsoft Defender for Cloud description: Learn how to create rich, interactive reports of your Microsoft Defender for Cloud data with the integrated Azure Monitor Workbooks gallery ++ Last updated 01/23/2022
defender-for-cloud Custom Security Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/custom-security-policies.md
Title: Create custom security policies in Microsoft Defender for Cloud | Microsoft Docs description: Azure custom policy definitions monitored by Microsoft Defender for Cloud. ++ Last updated 12/23/2021 zone_pivot_groups: manage-asc-initiatives
defender-for-cloud Data Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/data-security.md
Title: Microsoft Defender for Cloud data security | Microsoft Docs description: Learn how data is managed and safeguarded in Microsoft Defender for Cloud. ++ Last updated 11/09/2021 # Microsoft Defender for Cloud data security
defender-for-cloud Defender For App Service Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-app-service-introduction.md
Title: Microsoft Defender for App Service - the benefits and features
description: Learn about the capabilities of Microsoft Defender for App Service and how to enable it on your subscription Last updated 11/09/2021 ++ # Protect your web apps and APIs
defender-for-cloud Defender For Cloud Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-cloud-introduction.md
Title: Microsoft Defender for Cloud - an introduction description: Use Microsoft Defender for Cloud to protect your Azure, hybrid, and multi-cloud resources and workloads. ++ Last updated 02/28/2022
defender-for-cloud Defender For Container Registries Cicd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-cicd.md
Title: Defender for Cloud's vulnerability scanner for container images in CI/CD
description: Learn how to scan container images in CI/CD workflows with Microsoft Defender for container registries Last updated 11/09/2021 ++ # Identify vulnerable container images in your CI/CD workflows
defender-for-cloud Defender For Container Registries Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-introduction.md
Title: Microsoft Defender for container registries - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for container registries. Last updated 12/08/2021 ++ # Introduction to Microsoft Defender for container registries (deprecated)
defender-for-cloud Defender For Container Registries Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-container-registries-usage.md
Title: How to use Defender for Containers
description: Learn how to use Defender for Containers to scan Linux images in your Linux-hosted registries Last updated 03/07/2022 ++ # Use Defender for Containers to scan your ACR images for vulnerabilities
defender-for-cloud Defender For Containers Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-enable.md
Title: How to enable Microsoft Defender for Containers in Microsoft Defender for Cloud description: Enable the container protections of Microsoft Defender for Containers ++ zone_pivot_groups: k8s-host Last updated 03/15/2022
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers ++ Last updated 03/15/2022
The following describes the components necessary in order to receive the full pr
## FAQ - Defender for Containers -- [What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for Containers enabled?](#what-happens-to-subscriptions-with-microsoft-defender-for-kubernetes-or-microsoft-defender-for-containers-enabled)-- [Is Defender for Containers a mandatory upgrade?](#is-defender-for-containers-a-mandatory-upgrade)-- [Does the new plan reflect a price increase?](#does-the-new-plan-reflect-a-price-increase) - [What are the options to enable the new plan at scale?](#what-are-the-options-to-enable-the-new-plan-at-scale)
-### What happens to subscriptions with Microsoft Defender for Kubernetes or Microsoft Defender for Containers enabled?
-
-Subscriptions that already have one of these plans enabled can continue to benefit from it.
-
-If you haven't enabled them yet, or create a new subscription, these plans can no longer be enabled.
-
-### Is Defender for Containers a mandatory upgrade?
-
-No. Subscriptions that have either Microsoft Defender for Kubernetes or Microsoft Defender for Containers Registries enabled doesn't need to be upgraded to the new Microsoft Defender for Containers plan. However, they won't benefit from the new and improved capabilities and theyΓÇÖll have an upgrade icon shown alongside them in the Azure portal.
-
-### Does the new plan reflect a price increase?
-No. ThereΓÇÖs no direct price increase. The new comprehensive Container security plan combines Kubernetes protection and container registry image scanning, and removes the previous dependency on the (paid) Defender for Servers plan.
- ### What are the options to enable the new plan at scale? We've rolled out a new policy in Azure Policy, **Configure Microsoft Defender for Containers to be enabled**, to make it easier to enable the new plan at scale.
defender-for-cloud Defender For Databases Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-introduction.md
Title: Microsoft Defender for open-source relational databases - the benefits an
description: Learn about the benefits and features of Microsoft Defender for open-source relational databases such as PostgreSQL, MySQL, and MariaDB Last updated 01/17/2022 ++ # Introduction to Microsoft Defender for open-source relational databases
defender-for-cloud Defender For Databases Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-databases-usage.md
Title: Setting up and responding to alerts from Microsoft Defender for open-sour
description: Learn how to configure Microsoft Defender for open-source relational databases to detect anomalous database activities indicating potential security threats to the database. Last updated 11/09/2021 ++ # Enable Microsoft Defender for open-source relational databases and respond to alerts
defender-for-cloud Defender For Dns Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-dns-introduction.md
Title: Microsoft Defender for DNS - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for DNS Last updated 11/09/2021 ++ # Introduction to Microsoft Defender for DNS
defender-for-cloud Defender For Key Vault Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-key-vault-introduction.md
Title: Microsoft Defender for Key Vault - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for Key Vault. Last updated 11/09/2021 ++
defender-for-cloud Defender For Kubernetes Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-kubernetes-introduction.md
Title: Microsoft Defender for Kubernetes - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for Kubernetes. Last updated 03/10/2022 ++ # Introduction to Microsoft Defender for Kubernetes (deprecated)
defender-for-cloud Defender For Resource Manager Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-introduction.md
Title: Microsoft Defender for Resource Manager - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for Resource Manager Last updated 11/09/2021 ++ # Introduction to Microsoft Defender for Resource Manager
defender-for-cloud Defender For Resource Manager Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-resource-manager-usage.md
Title: How to respond to Microsoft Defender for Resource Manager alerts
description: Learn about the steps necessary for responding to alerts from Microsoft Defender for Resource Manager Last updated 11/09/2021 ++ # Respond to Microsoft Defender for Resource Manager alerts
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for servers - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for servers. Last updated 03/08/2022 ++ # Introduction to Microsoft Defender for servers
defender-for-cloud Defender For Sql Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-introduction.md
Title: Microsoft Defender for SQL - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for SQL. Last updated 01/06/2022 ++
defender-for-cloud Defender For Sql On Machines Vulnerability Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-on-machines-vulnerability-assessment.md
Title: Using Microsoft Defender for Cloud's integrated vulnerability assessment scanner for SQL servers description: Learn about Microsoft Defender for SQL servers on machines' integrated vulnerability assessment scanner ++ Last updated 11/09/2021
defender-for-cloud Defender For Sql Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-sql-usage.md
Title: How to set up Microsoft Defender for SQL description: Learn how to enable Microsoft Defender for Cloud's optional Microsoft Defender for SQL plan ++ Last updated 11/09/2021
defender-for-cloud Defender For Storage Exclude https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-exclude.md
Title: Microsoft Defender for Storage - excluding a storage account
description: Excluding a specific storage account from a subscription with Microsoft Defender for Storage enabled. Last updated 02/06/2022 ++ # Exclude a storage account from Microsoft Defender for Storage protections
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features
description: Learn about the benefits and features of Microsoft Defender for Storage. Last updated 01/16/2022 ++ # Introduction to Microsoft Defender for Storage
defender-for-cloud Deploy Vulnerability Assessment Byol Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-byol-vm.md
Title: BYOL VM vulnerability assessment in Microsoft Defender for Cloud description: Deploy a BYOL vulnerability assessment solution on your Azure virtual machines to get recommendations in Microsoft Defender for Cloud that can help you protect your virtual machines. ++ Last updated 11/09/2021
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
Title: Use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud description: Enable, deploy, and use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud to discover weaknesses in your Azure and hybrid machines ++ Last updated 03/06/2022
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Title: Defender for Cloud's integrated vulnerability assessment solution for Azure, hybrid, and multi-cloud machines description: Install a vulnerability assessment solution on your Azure machines to get recommendations in Microsoft Defender for Cloud that can help you protect your Azure and hybrid machines ++ Last updated 11/16/2021 # Defender for Cloud's integrated Qualys vulnerability scanner for Azure and hybrid machines
defender-for-cloud Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-data-collection.md
Title: Auto-deploy agents for Microsoft Defender for Cloud | Microsoft Docs description: This article describes how to set up auto provisioning of the Log Analytics agent and other agents and extensions used by Microsoft Defender for Cloud ++ Last updated 01/17/2022
defender-for-cloud Enable Enhanced Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enable-enhanced-security.md
Title: Enable Microsoft Defender for Cloud's integrated workload protections description: Learn how to enable enhanced security features to extend the protections of Microsoft Defender for Cloud to your hybrid and multi-cloud resources ++ Last updated 11/09/2021
defender-for-cloud Endpoint Protection Recommendations Technical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/endpoint-protection-recommendations-technical.md
Title: Endpoint protection recommendations in Microsoft Defender for Cloud description: How the endpoint protection solutions are discovered and identified as healthy. ++ Last updated 03/08/2022 # Endpoint protection assessment and recommendations in Microsoft Defender for Cloud
defender-for-cloud Enhanced Security Features Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/enhanced-security-features-overview.md
Title: Understand the enhanced security features of Microsoft Defender for Cloud description: Learn about the benefits of enabling enhanced security in Microsoft Defender for Cloud ++ Last updated 02/24/2022
The enhanced security features are free for the first 30 days. At the end of 30
You can upgrade from the **Environment settings** page, as described in [Quickstart: Enable enhanced security features](enable-enhanced-security.md). For pricing details in your local currency or region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). - ## What are the benefits of enabling enhanced security features? Defender for Cloud is offered in two modes:
defender-for-cloud Exempt Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/exempt-resource.md
Title: Exempt a Microsoft Defender for Cloud recommendation from a resource, subscription, management group, and secure score description: Learn how to create rules to exempt security recommendations from subscriptions or management groups and prevent them from impacting your secure score ++ Last updated 01/02/2022
defender-for-cloud Export To Siem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/export-to-siem.md
Title: Stream your alerts from Microsoft Defender for Cloud to Security Information and Event Management (SIEM) systems and other monitoring solutions description: Learn how to stream your security alerts to Microsoft Sentinel, third-party SIEMs, SOAR, or ITSM solutions ++ Last updated 11/09/2021
defender-for-cloud Features Paas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/features-paas.md
Title: Microsoft Defender for Cloud features for supported Azure PaaS resources. description: This page shows the availability of Microsoft Defender for Cloud features for the supported Azure PaaS resources. ++ Last updated 02/27/2022 # Feature coverage for Azure PaaS services <a name="paas-services"></a>
defender-for-cloud File Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-overview.md
Title: File integrity monitoring in Microsoft Defender for Cloud description: Learn how to configure file integrity monitoring (FIM) in Microsoft Defender for Cloud using this walkthrough. ++ Last updated 11/09/2021 # File integrity monitoring in Microsoft Defender for Cloud
defender-for-cloud File Integrity Monitoring Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-usage.md
Title: File Integrity Monitoring in Microsoft Defender for Cloud description: Learn how to compare baselines with File Integrity Monitoring in Microsoft Defender for Cloud. ++ Last updated 11/09/2021
defender-for-cloud Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/get-started.md
Title: Microsoft Defender for Cloud's enhanced security features description: Learn how to enable Microsoft Defender for Cloud's enhanced security features. ++ Last updated 11/09/2021
defender-for-cloud Harden Docker Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/harden-docker-hosts.md
Title: Use Microsoft Defender for Cloud to harden your Docker hosts and protect the containers description: How to protect your Docker hosts and verify they're compliant with the CIS Docker benchmark ++ Last updated 11/09/2021 # Harden your Docker hosts
defender-for-cloud Implement Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/implement-security-recommendations.md
Title: Implement security recommendations in Microsoft Defender for Cloud | Microsoft Docs description: This article explains how to respond to recommendations in Microsoft Defender for Cloud to protect your resources and satisfy security policies. ++ Last updated 11/09/2021 # Implement security recommendations in Microsoft Defender for Cloud
defender-for-cloud Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/incidents.md
Title: Manage security incidents in Microsoft Defender for Cloud | Microsoft Docs description: This document helps you to use Microsoft Defender for Cloud to manage security incidents. ++ Last updated 11/09/2021 # Manage security incidents in Microsoft Defender for Cloud
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 03/10/2022 Last updated : 03/16/2022 # What's new in Microsoft Defender for Cloud?
Updates in March include:
- [New alert for Microsoft Defender for Storage (preview)](#new-alert-for-microsoft-defender-for-storage-preview) - [Configure email notifications settings from an alert](#configure-email-notifications-settings-from-an-alert) - [Deprecated preview alert: ARM.MCAS_ActivityFromAnonymousIPAddresses](#deprecated-preview-alert-armmcas_activityfromanonymousipaddresses)
+- [Moved the recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices](#moved-the-recommendation-vulnerabilities-in-container-security-configurations-should-be-remediated-from-the-secure-score-to-best-practices)
### Deprecated the recommendations to install the network traffic data collection agent
A new alert has been created that provides this information and adds to it. In a
See more alerts for [Resource Manager](alerts-reference.md#alerts-resourcemanager).
+### Moved the recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices
+
+The recommendation `Vulnerabilities in container security configurations should be remediated` has been moved from the secure score section to the best practices section.
+
+The current user experience only provides the score when all compliance checks have passed. Most customers have difficulty meeting all the required checks. We're working on an improved experience for this recommendation, and once it's released, the recommendation will be moved back to the secure score.
+ ## February 2022 Updates in February include:
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 03/13/2022 Last updated : 03/16/2022 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
|--|--| | [Legacy implementation of ISO 27001 is being replaced with new ISO 27001:2013](#legacy-implementation-of-iso-27001-is-being-replaced-with-new-iso-270012013) | January 2022 | | [Deprecating the recommendation to use service principals to protect your subscriptions](#deprecating-the-recommendation-to-use-service-principals-to-protect-your-subscriptions) | February 2022 |
-| [Moving recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices](#moving-recommendation-vulnerabilities-in-container-security-configurations-should-be-remediated-from-the-secure-score-to-best-practices) | February 2022 |
| [Changes to recommendations for managing endpoint protection solutions](#changes-to-recommendations-for-managing-endpoint-protection-solutions) | March 2022 | | [AWS and GCP recommendations to GA](#aws-and-gcp-recommendations-to-ga) | March 2022 | | [Relocation of custom recommendations](#relocation-of-custom-recommendations) | March 2022 |
Learn more:
- [Overview of Azure Cloud Services (classic)](../cloud-services/cloud-services-choose-me.md) - [Workflow of Windows Azure classic VM Architecture - including RDFE workflow basics](../cloud-services/cloud-services-workflow-process.md)
-### Moving recommendation Vulnerabilities in container security configurations should be remediated from the secure score to best practices
-
-**Estimated date for change:** February 2022
-
-The recommendation for 'Vulnerabilities in container security configurations should be remediated' is being moved from the secure score section to best practices section.
-
-The current user experience only provides the score when all compliance checks have passed. Most customers have difficulty meeting all the required checks. We're working on an improved experience for this recommendation, and once it's released, the recommendation will be moved back to the secure score.
- ### Changes to recommendations for managing endpoint protection solutions **Estimated date for change:** March 2022
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-handlers.md
Title: Azure Event Grid event handlers description: Describes supported event handlers for Azure Event Grid. Azure Automation, Functions, Event Hubs, Hybrid Connections, Logic Apps, Service Bus, Queue Storage, Webhooks. Previously updated : 09/15/2021 Last updated : 03/15/2022 # Event handlers in Azure Event Grid
An event handler is the place where the event is sent. The handler takes some fu
## Supported event handlers Here are the supported event handlers: -- [Webhooks](handler-webhooks.md). Azure Automation runbooks and Logic Apps are supported via webhooks. -- [Azure functions](handler-functions.md)-- [Event hubs](handler-event-hubs.md)-- [Service Bus queues and topics](handler-service-bus.md)-- [Relay hybrid connections](handler-relay-hybrid-connections.md)-- [Storage queues](handler-storage-queues.md) ## Next steps - For an introduction to Event Grid, see [About Event Grid](overview.md).
event-grid Manage Event Delivery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/manage-event-delivery.md
To set a dead letter location, you need a storage account for holding events tha
> [!NOTE] > - Create a storage account and a blob container in the storage account before running the commands in this article. > - The Event Grid service creates blobs in this container. The blob names contain the name of the Event Grid subscription in all uppercase letters. For example, if the name of the subscription is My-Blob-Subscription, the names of the dead letter blobs will contain MY-BLOB-SUBSCRIPTION (myblobcontainer/MY-BLOB-SUBSCRIPTION/2019/8/8/5/111111111-1111-1111-1111-111111111111.json). This behavior protects against differences in case handling between Azure services.
+> - The dead letter blobs created will contain one or more events in an array. This is an important behavior to consider when processing dead letters.
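The uppercase naming and the array payload described above can be sketched as follows. This is a minimal Python illustration with hypothetical helper names (`dead_letter_prefix`, `parse_dead_letter_blob`), not part of any Event Grid SDK:

```python
import json

def dead_letter_prefix(container: str, subscription_name: str) -> str:
    # Event Grid upper-cases the subscription name when it names
    # dead-letter blobs, so derive the prefix the same way.
    return f"{container}/{subscription_name.upper()}"

def parse_dead_letter_blob(blob_text: str) -> list:
    # A dead-letter blob holds ONE OR MORE events in a JSON array,
    # so always iterate rather than expecting a single object.
    events = json.loads(blob_text)
    return [event["id"] for event in events]

prefix = dead_letter_prefix("myblobcontainer", "My-Blob-Subscription")
# e.g. "myblobcontainer/MY-BLOB-SUBSCRIPTION"

sample = '[{"id": "1", "eventType": "A"}, {"id": "2", "eventType": "B"}]'
ids = parse_dead_letter_blob(sample)
```

A consumer that assumes one event per blob will silently drop events; iterating over the array handles both the single-event and multi-event cases.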
### Azure CLI
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md
Title: What is Azure Event Grid? description: Send event data from a source to handlers with Azure Event Grid. Build event-based applications, and integrate with Azure services. Previously updated : 02/04/2022 Last updated : 03/15/2022 # What is Azure Event Grid?
This article provides an overview of Azure Event Grid. If you want to get starte
Currently, the following Azure services support sending events to Event Grid. For more information about a source in the list, select the link. -- [Azure API Management](event-schema-api-management.md)-- [Azure App Configuration](event-schema-app-configuration.md)-- [Azure App Service](event-schema-app-service.md)-- [Azure Blob Storage](event-schema-blob-storage.md)-- [Azure Cache for Redis](event-schema-azure-cache.md)-- [Azure Communication Services](event-schema-communication-services.md) -- [Azure Container Registry](event-schema-container-registry.md)-- [Azure Event Hubs](event-schema-event-hubs.md)-- [Azure FarmBeats](event-schema-farmbeats.md)-- [Azure IoT Hub](event-schema-iot-hub.md)-- [Azure Key Vault](event-schema-key-vault.md)-- [Azure Kubernetes Service (preview)](event-schema-aks.md)-- [Azure Machine Learning](event-schema-machine-learning.md)-- [Azure Maps](event-schema-azure-maps.md)-- [Azure Media Services](event-schema-media-services.md)-- [Azure Policy](event-schema-policy.md)-- [Azure resource groups](event-schema-resource-groups.md)-- [Azure Service Bus](event-schema-service-bus.md)-- [Azure SignalR](event-schema-azure-signalr.md)-- [Azure subscriptions](event-schema-subscriptions.md) ## Event handlers For full details on the capabilities of each handler as well as related articles, see [event handlers](event-handlers.md). Currently, the following Azure services support handling events from Event Grid:
-* [Azure Automation](handler-webhooks.md#azure-automation)
-* [Azure Functions](handler-functions.md)
-* [Event Hubs](handler-event-hubs.md)
-* [Relay Hybrid Connections](handler-relay-hybrid-connections.md)
-* [Logic Apps](handler-webhooks.md#logic-apps)
-* [Power Automate (Formerly known as Microsoft Flow)](https://preview.flow.microsoft.com/connectors/shared_azureeventgrid/azure-event-grid/)
-* [Service Bus](handler-service-bus.md)
-* [Queue Storage](handler-storage-queues.md)
-* [WebHooks](handler-webhooks.md)
## Concepts
event-grid System Topics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/system-topics.md
A system topic in Event Grid represents one or more events published by Azure se
## Azure services that support system topics Here's the current list of Azure services that support creation of system topics on them. -- [Azure API Management](event-schema-api-management.md)-- [Azure App Configuration](event-schema-app-configuration.md)-- [Azure App Service](event-schema-app-service.md)-- [Azure Blob Storage](event-schema-blob-storage.md)-- [Azure Cache for Redis](event-schema-azure-cache.md)-- [Azure Communication Services](event-schema-communication-services.md) -- [Azure Container Registry](event-schema-container-registry.md)-- [Azure Event Hubs](event-schema-event-hubs.md)-- [Azure FarmBeats](event-schema-farmbeats.md)-- [Azure Health Data Services](event-schema-azure-health-data-services.md)-- [Azure IoT Hub](event-schema-iot-hub.md)-- [Azure Key Vault](event-schema-key-vault.md)-- [Azure Kubernetes Service](event-schema-aks.md)-- [Azure Machine Learning](event-schema-machine-learning.md)-- [Azure Maps](event-schema-azure-maps.md)-- [Azure Media Services](event-schema-media-services.md)-- [Azure Policy](./event-schema-policy.md)-- [Azure resource groups](event-schema-resource-groups.md)-- [Azure Service Bus](event-schema-service-bus.md)-- [Azure SignalR](event-schema-azure-signalr.md)-- [Azure subscriptions](event-schema-subscriptions.md) ## System topics as Azure resources
frontdoor How To Enable Private Link Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-web-app.md
Title: 'Connect Azure Front Door Premium to a web app origin with Private Link'
+ Title: 'Connect Azure Front Door Premium to an app service origin with Private Link'
description: Learn how to connect your Azure Front Door Premium to a webapp privately.
Last updated 02/18/2021
-# Connect Azure Front Door Premium to a Web App origin with Private Link
+# Connect Azure Front Door Premium to an App Service origin with Private Link
-This article will guide you through how to configure Azure Front Door Premium SKU to connect to your Web App privately using the Azure Private Link service.
+This article guides you through configuring the Azure Front Door Premium SKU to connect to your App Service privately by using the Azure Private Link service.
## Prerequisites
This article will guide you through how to configure Azure Front Door Premium SK
Sign in to the [Azure portal](https://portal.azure.com).
-## Enable Private Link to a Web App in Azure Front Door Premium
+## Enable Private Link to an App Service in Azure Front Door Premium
In this section, you'll map the Private Link service to a private endpoint created in Azure Front Door's private network. 1. Within your Azure Front Door Premium profile, under *Settings*, select **Origin groups**.
-1. Select the origin group that contains the Web App origin you want to enable Private Link for.
+1. Select the origin group that contains the App Service origin you want to enable Private Link for.
-1. Select **+ Add an origin** to add a new web app origin or select a previously created web app origin from the list.
+1. Select **+ Add an origin** to add a new App Service origin or select a previously created App Service origin from the list.
:::image type="content" source="../media/how-to-enable-private-link-web-app/private-endpoint-web-app.png" alt-text="Screenshot of enabling private link to a Web App.":::
In this section, you'll map the Private Link service to a private endpoint creat
1. Then select **Add** to save your configuration.
-## Approve Azure Front Door Premium private endpoint connection from Web App
+## Approve Azure Front Door Premium private endpoint connection from App Service
-1. Go to the Web App you configure Private Link for in the last section. Select **Networking** under **Settings**.
+1. Go to the App Service you configured Private Link for in the last section. Select **Networking** under **Settings**.
1. In **Networking**, select **Configure your private endpoint connections**.
In this section, you'll map the Private Link service to a private endpoint creat
:::image type="content" source="../media/how-to-enable-private-link-web-app/private-endpoint-pending-approval.png" alt-text="Screenshot of pending private endpoint request.":::
-1. Once approved, it should look like the screenshot below. It will take a few minutes for the connection to fully establish. You can now access your web app from Azure Front Door Premium. Direct access to the Web App from the public internet gets disabled after private endpoint gets enabled.
+1. Once approved, it should look like the screenshot below. It will take a few minutes for the connection to fully establish. You can now access your App Service from Azure Front Door Premium. Direct access to the App Service from the public internet is disabled after the private endpoint is enabled.
:::image type="content" source="../media/how-to-enable-private-link-web-app/private-endpoint-approved.png" alt-text="Screenshot of approved endpoint request.":::-
-## Next steps
-
-Learn about [Private Link service with Azure Web App](../../app-service/networking/private-endpoint.md).
governance New Zealand Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md
Title: Regulatory Compliance details for New Zealand ISM Restricted
-description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
+description: New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Last updated 03/10/2022
initiative definition, open **Policy** in the Azure portal and select the **Defi
Then, find and select the **New Zealand ISM Restricted** Regulatory Compliance built-in initiative definition.
-This built-in initiative is deployed as part of the
-[New Zealand ISM Restricted blueprint sample](../../blueprints/samples/new-zealand-ism.md).
- > [!IMPORTANT] > Each control below is associated with one or more [Azure Policy](../overview.md) definitions. > These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
Additional articles about Azure Policy:
- See the [initiative definition structure](../concepts/initiative-definition-structure.md). - Review other examples at [Azure Policy samples](./index.md). - Review [Understanding policy effects](../concepts/effects.md).-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
hdinsight Apache Hadoop Develop Deploy Java Mapreduce Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-develop-deploy-java-mapreduce-linux.md
Save the `pom.xml` file.
notepad src\main\java\org\apache\hadoop\examples\WordCount.java ```
-2. Then copy and paste the java code below into the new file. Then close the file.
+2. Then copy and paste the Java code below into the new file. Then close the file.
```java package org.apache.hadoop.examples;
hdinsight Apache Hadoop Hive Java Udf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-hive-java-udf.md
cd C:\HDI
notepad src/main/java/com/microsoft/examples/ExampleUDF.java ```
- Then copy and paste the java code below into the new file. Then close the file.
+ Then copy and paste the Java code below into the new file. Then close the file.
```java package com.microsoft.examples;
hdinsight Apache Hbase Build Java Maven Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/apache-hbase-build-java-maven-linux.md
Enter the command below to create and open a new file `CreateTable.java`. Select
notepad src\main\java\com\microsoft\examples\CreateTable.java ```
-Then copy and paste the java code below into the new file. Then close the file.
+Then copy and paste the Java code below into the new file. Then close the file.
```java package com.microsoft.examples;
Enter the command below to create and open a new file `SearchByEmail.java`. Sele
notepad src\main\java\com\microsoft\examples\SearchByEmail.java ```
-Then copy and paste the java code below into the new file. Then close the file.
+Then copy and paste the Java code below into the new file. Then close the file.
```java package com.microsoft.examples;
Enter the command below to create and open a new file `DeleteTable.java`. Select
notepad src\main\java\com\microsoft\examples\DeleteTable.java ```
-Then copy and paste the java code below into the new file. Then close the file.
+Then copy and paste the Java code below into the new file. Then close the file.
```java package com.microsoft.examples;
hdinsight Hbase Troubleshoot Pegged Cpu Region Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-pegged-cpu-region-server.md
If you are running HBase cluster v3.4, you might have been hit by a potential bu
1. Depending on the data on the cluster, it might take from a few minutes up to an hour for the cluster to reach a stable state. To confirm that the cluster has reached a stable state, either check the HMaster UI (all region servers should be active) from Ambari (refresh), or run the HBase shell from the headnode and then run the `status` command.
-To verify that your upgrade was successful, check that the relevant HBase processes are started using the appropriate java version - for instance for region server check as:
+To verify that your upgrade was successful, check that the relevant HBase processes are started with the appropriate Java version. For example, for the region server, check as follows:
Run `ps -aux | grep regionserver` and verify that the Java version is similar to `/usr/lib/jvm/java-8-openjdk-amd64/bin/java`.
hdinsight Apache Kafka Ssl Encryption Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-ssl-encryption-authentication.md
These steps are detailed in the following code snippets.
The following four steps summarize the tasks needed to complete the client setup:

1. Sign in to the client machine (standby head node).
-1. Create a java keystore and get a signed certificate for the broker. Then copy the certificate to the VM where the CA is running.
+1. Create a Java keystore and get a signed certificate for the broker. Then copy the certificate to the VM where the CA is running.
1. Switch to the CA machine (active head node) to sign the client certificate.
1. Go to the client machine (standby head node) and navigate to the `~/ssl` folder. Copy the signed certificate to the client machine.
The details of each step are given below.
cd ssl ```
-1. Create a java keystore and create a certificate signing request.
+1. Create a Java keystore and create a certificate signing request.
```bash keytool -genkey -keystore kafka.client.keystore.jks -validity 365 -storepass "MyClientPassword123" -keypass "MyClientPassword123" -dname "CN=HEADNODE1_FQDN" -storetype pkcs12
hdinsight Troubleshoot Debug Wasb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/troubleshoot-debug-wasb.md
A produced log will look similar to:
## Additional logging
-The above logs should provide high-level understanding of the file system operations. If the above logs are still not providing useful information, or if you want to investigate blob storage api calls, add `fs.azure.storage.client.logging=true` to the `core-site`. This setting will enable the java sdk logs for wasb storage driver and will print each call to blob storage server. Remove the setting after investigations because it could fill up the disk quickly and could slow down the process.
+The above logs should provide a high-level understanding of the file system operations. If they still don't provide useful information, or if you want to investigate Blob storage API calls, add `fs.azure.storage.client.logging=true` to the `core-site` configuration. This setting enables the Java SDK logs for the WASB storage driver and prints each call to the Blob storage server. Remove the setting after your investigation, because it can fill up the disk quickly and slow down the process.
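As an illustration of the setting above, here's how the property might look in the cluster's `core-site.xml` (a sketch; confirm the exact file and management workflow, such as Ambari, for your cluster):

```xml
<!-- core-site.xml: enable verbose Azure storage client (WASB) logging -->
<property>
  <name>fs.azure.storage.client.logging</name>
  <value>true</value>
</property>
```

Remember to remove this property again once your investigation is complete.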
If the backend is Azure Data Lake based, then use the following log4j setting for the component(for example, spark/tez/hdfs):
hdinsight Apache Storm Develop Java Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/storm/apache-storm-develop-java-topology.md
Enter the command below to create and open a new file `RandomSentenceSpout.java`
notepad src\main\java\com\microsoft\example\RandomSentenceSpout.java ```
-Then copy and paste the java code below into the new file. Then close the file.
+Then copy and paste the Java code below into the new file. Then close the file.
```java package com.microsoft.example;
Enter the command below to create and open a new file `SplitSentence.java`:
notepad src\main\java\com\microsoft\example\SplitSentence.java ```
-Then copy and paste the java code below into the new file. Then close the file.
+Then copy and paste the Java code below into the new file. Then close the file.
```java package com.microsoft.example;
Enter the command below to create and open a new file `WordCount.java`:
notepad src\main\java\com\microsoft\example\WordCount.java ```
-Then copy and paste the java code below into the new file. Then close the file.
+Then copy and paste the Java code below into the new file. Then close the file.
```java package com.microsoft.example;
To implement the topology, enter the command below to create and open a new file
notepad src\main\java\com\microsoft\example\WordCountTopology.java ```
-Then copy and paste the java code below into the new file. Then close the file.
+Then copy and paste the Java code below into the new file. Then close the file.
```java package com.microsoft.example;
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/release-notes.md
Previously updated : 01/11/2022 Last updated : 03/15/2022
Azure API for FHIR provides a fully managed deployment of the Microsoft FHIR Server for Azure. The server is an implementation of the [FHIR](https://hl7.org/fhir) standard. This document provides details about the features and enhancements made to Azure API for FHIR.
+## February 2022
+
+### **Features and enhancements**
+
+|Enhancements &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; |Related information |
+| :-- | :-- |
+|Added 429 retry and logging in BundleHandler |We sometimes encounter 429 errors when processing a bundle. If the FHIR service receives a 429 at the BundleHandler layer, we abort processing of the bundle and skip the remaining resources. We've added an additional retry (in addition to the retry present in the data store layer) that will execute one time per resource that encounters a 429. For more about this feature enhancement, see [PR #2400](https://github.com/microsoft/fhir-server/pull/2400).|
+|Billing for $convert-data and $de-id |Azure API for FHIR's data conversion and de-identified export features are now Generally Available. Billing for $convert-data and $de-id operations in Azure API for FHIR has been enabled. Billing meters were turned on March 1, 2022. |
+
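The retry behavior described in the enhancement above can be sketched as follows (a minimal Python illustration, not the FHIR server's actual implementation; `process_resource` and the status handling are hypothetical):

```python
def process_bundle(resources, process_resource):
    """Process each resource in a bundle, retrying once per resource on HTTP 429.

    If the retry still returns 429, abort the bundle and skip the
    remaining resources, matching the behavior described above.
    """
    results = []
    for resource in resources:
        status = process_resource(resource)
        if status == 429:
            # One extra retry per resource, on top of any data-store-layer retries.
            status = process_resource(resource)
        if status == 429:
            break  # still throttled: abort, skipping remaining resources
        results.append(status)
    return results

# Simulate a handler that returns 429 on the first call, then succeeds.
calls = {"n": 0}
def flaky(resource):
    calls["n"] += 1
    return 429 if calls["n"] == 1 else 200

print(process_bundle(["Patient/1", "Observation/2"], flaky))  # [200, 200]
```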
+### **Bug fixes**
+
+|Bug fixes |Related information |
+| :-- | :-- |
+|Update compartment search index |There was a corner case where the compartment search index wasn't being set on resources. Now we use the same index as the main search for compartment search to ensure all data is being returned. For more about the code fix, see [PR #2430](https://github.com/microsoft/fhir-server/pull/2430).|
+
+
## December 2021

### **Features and enhancements**
healthcare-apis Healthcare Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/healthcare-apis-overview.md
Previously updated : 03/01/2022 Last updated : 03/15/2022
Azure Health Data Services enables you to:
**Linked Services**
-Azure Health Data Services now supports multiple health data standards for the exchange of structured data. A single collection of Azure Health Data Services enables you to deploy multiple instances of different service types (FHIR service, DICOM service, and IoT connector) that seamlessly work with one another.
+Azure Health Data Services now supports multiple health data standards for the exchange of structured data. A single collection of Azure Health Data Services enables you to deploy multiple instances of different service types (FHIR, DICOM, and MedTech) that seamlessly work with one another. Services deployed within a workspace also share a compliance boundary and common configuration settings. The product scales automatically to meet the varying demands of your workloads, so you spend less time managing infrastructure and more time generating insights from health data.
**Introducing DICOM service**
-Azure Health Data Services now includes support for DICOM services. DICOM enables the secure exchange of image data and its associated metadata. DICOM is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare. For more information about the DICOM service, see [Overview of DICOM](./dicom/dicom-services-overview.md).
+Azure Health Data Services now includes support for the DICOM service. DICOM enables the secure exchange of image data and its associated metadata. DICOM is the international standard to transmit, store, retrieve, print, process, and display medical imaging information, and is the primary medical imaging standard accepted across healthcare. For more information about the DICOM service, see [Overview of DICOM](./dicom/dicom-services-overview.md).
**Incremental changes to the FHIR Service**
-For the secure exchange of FHIR data, Azure Health Data Services offers a few incremental capabilities that aren't available in the Azure API for FHIR.
-* Support for Transactions: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](http://www.hl7.org/) and refer to batch/transaction interactions.
-* Chained Search Improvements: Chained Search & Reserve Chained Search are no longer limited by 100 items per sub query.
+For the secure exchange of FHIR data, Azure Health Data Services offers a few incremental capabilities that aren't available in Azure API for FHIR.
+
+* **Support for Transactions**: In Azure Health Data Services, the FHIR service supports transaction bundles. For more information about transaction bundles, visit [HL7.org](http://www.hl7.org/) and refer to batch/transaction interactions.
+* [Chained Search Improvements](./././fhir/overview-of-search.md#chained--reverse-chained-searching): Chained search and reverse chained search are no longer limited to 100 items per subquery.
+* The $convert-data operation can now transform JSON objects to FHIR R4.
+* Events: Trigger new workflows when resources are created, updated, or deleted in a FHIR service.
## Next steps
healthcare-apis Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md
Previously updated : 02/15/2022 Last updated : 03/15/2022

# Release notes: Azure Health Data Services
+>[!Note]
+> Azure Health Data Services is Generally Available.
+>
+>For more information about Azure Health Data Services Service Level Agreements, see [SLA for Azure Health Data Services](https://azure.microsoft.com/support/legal/sla/health-data-services/v1_1/).
+ Azure Health Data Services is a set of managed API services based on open standards and frameworks for the healthcare industry. They enable you to build scalable and secure healthcare solutions by bringing protected health information (PHI) datasets together and connecting them end-to-end with tools for machine learning, analytics, and AI. This document provides details about the features and enhancements made to Azure Health Data Services, including the different service types (FHIR service, DICOM service, and IoT connector) that seamlessly work with one another.

## January 2022
iot-central Howto Manage Organizations With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-organizations-with-rest-api.md
+
+ Title: Use the REST API to manage organizations in Azure IoT Central
+description: How to use the IoT Central REST API to manage organizations in an application
+ Last updated : 03/08/2022
+# How to use the IoT Central REST API to manage organizations
+
+The IoT Central REST API lets you develop client applications that integrate with IoT Central applications. You can use the REST API to manage organizations in your IoT Central application.
+
+> [!TIP]
+> The [organizations feature](howto-create-organizations.md) is currently available in the [preview API](/rest/api/iotcentral/1.1-previewdataplane/users).
+
+Every IoT Central REST API call requires an authorization header. To learn more, see [How to authenticate and authorize IoT Central REST API calls](howto-authorize-rest-api.md).
+
+For the reference documentation for the IoT Central REST API, see [Azure IoT Central REST API reference](/rest/api/iotcentral/).
+
+To learn more about organizations in your IoT Central application, see [Manage IoT Central organizations](howto-create-organizations.md).
+
+## Organizations REST API
+
+The IoT Central REST API lets you:
+
+* Add an organization to your application
+* Get an organization by ID
+* Update an organization in your application
+* Get a list of the organizations in the application
+* Delete an organization in your application
+
+### Create organizations
+
+The REST API lets you create organizations in your IoT Central application. Use the following request to create an organization in your application:
+
+```http
+PUT https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.1-preview
+```
+
+* `organizationId` - the unique ID of the organization
+
+The following example shows a request body that adds an organization to an IoT Central application.
+
+```json
+{
+  "displayName": "Seattle"
+}
+```
+
+The request body has some required fields:
+
+* `displayName`: Display name of the organization.
+
+The request body has some optional fields:
+
+* `parent`: ID of the parent of the organization.
+
+ If you don't specify a parent, then the organization gets the default top-level organization as its parent.
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "seattle",
+ "displayName": "Seattle"
+}
+```
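The create request above can also be composed programmatically. The following Python sketch builds (but doesn't send) the PUT call; the subdomain, base domain, and API token are hypothetical placeholders:

```python
import json
import urllib.request

# Hypothetical values -- substitute your application's subdomain,
# base domain, organization ID, and API token.
subdomain, base_domain = "my-app", "azureiotcentral.com"
organization_id = "seattle"
api_token = "SharedAccessSignature sr=..."

url = (f"https://{subdomain}.{base_domain}/api/organizations/"
       f"{organization_id}?api-version=1.1-preview")
body = {"displayName": "Seattle"}  # add "parent" to nest under another org

request = urllib.request.Request(
    url,
    data=json.dumps(body).encode("utf-8"),
    headers={"Authorization": api_token, "Content-Type": "application/json"},
    method="PUT",
)
# urllib.request.urlopen(request) would send the call; it isn't executed here.
print(request.get_method(), request.full_url)
```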
+
+You can create organizations in a hierarchy. For example, you can create a sales organization with a parent organization.
+
+The following example shows a request body that adds a child organization to an IoT Central application.
+
+```json
+{
+ "displayName": "Sales",
+ "parent":"seattle"
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "sales",
+ "displayName": "Sales",
+  "parent": "seattle"
+}
+```
+
+### Get an organization
+
+Use the following request to retrieve details of an individual organization from your application:
+
+```http
+GET https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.1-preview
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "seattle",
+ "displayName": "Seattle",
+ "parent": "washington"
+}
+```
+
+### Update an organization
+
+Use the following request to update details of an organization in your application:
+
+```http
+PATCH https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=1.1-preview
+```
+
+The following example shows a request body that updates an organization.
+
+```json
+{
+ "id": "seattle",
+ "displayName": "Seattle Sales",
+ "parent": "washington"
+}
+```
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "seattle",
+ "displayName": "Seattle Sales",
+ "parent": "washington"
+}
+```
+
+### List organizations
+
+Use the following request to retrieve a list of organizations from your application:
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/organizations?api-version=1.1-preview
+```
+
+The response to this request looks like the following example.
+
+```json
+{
+ "value": [
+ {
+ "id": "washington",
+ "displayName": "Washington"
+ },
+ {
+ "id": "redmond",
+ "displayName": "Redmond"
+ },
+ {
+ "id": "bellevue",
+ "displayName": "Bellevue"
+ },
+ {
+ "id": "spokane",
+ "displayName": "Spokane",
+ "parent": "washington"
+ },
+ {
+ "id": "seattle",
+ "displayName": "Seattle",
+ "parent": "washington"
+ }
+ ]
+}
+```
+
+ The organizations Washington, Redmond, and Bellevue will automatically have the application's default top-level organization as their parent.
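The default-parent rule can be made concrete with a short sketch that resolves each organization's effective parent from the example list response (the default top-level organization ID shown is a placeholder, not a real API value):

```python
# The "value" array from the example list response above.
orgs = [
    {"id": "washington", "displayName": "Washington"},
    {"id": "redmond", "displayName": "Redmond"},
    {"id": "bellevue", "displayName": "Bellevue"},
    {"id": "spokane", "displayName": "Spokane", "parent": "washington"},
    {"id": "seattle", "displayName": "Seattle", "parent": "washington"},
]

DEFAULT_PARENT = "<default top-level organization>"  # placeholder ID

# Organizations without an explicit parent fall under the default top level.
parents = {org["id"]: org.get("parent", DEFAULT_PARENT) for org in orgs}
print(parents["seattle"])  # washington
print(parents["redmond"])
```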
+
+### Delete an organization
+
+Use the following request to delete an organization:
+
+```http
+DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=1.1-preview
+```
+
+## Next steps
+
+Now that you've learned how to manage organizations with the REST API, a suggested next step is to learn [how to use the IoT Central REST API to manage data exports](howto-manage-data-export-with-rest-api.md).
iot-central Howto Manage Users Roles With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-users-roles-with-rest-api.md
DELETE https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-v
## Next steps
-Now that you've learned how to manage users and roles with the REST API, a suggested next step is to [How to use the IoT Central REST API to manage data exports.](howto-manage-data-export-with-rest-api.md)
+Now that you've learned how to manage users and roles with the REST API, a suggested next step is to learn [how to use the IoT Central REST API to manage organizations](howto-manage-organizations-with-rest-api.md).
iot-edge Development Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/development-environment.md
When you're ready to develop at-scale solutions for extensive production scenari
For more information, guidance, and examples, see the following pages: * [Continuous integration and continuous deployment to Azure IoT Edge](how-to-continuous-integration-continuous-deployment.md)
-* [Create a CI/CD pipeline for IoT Edge with Azure DevOps Starter](how-to-devops-starter.md)
* [IoT Edge DevOps GitHub repo](https://github.com/toolboc/IoTEdge-DevOps)
iot-edge How To Collect And Transport Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-collect-and-transport-metrics.md
You can remotely monitor your IoT Edge fleet using Azure Monitor and built-in metrics integration. To enable this capability on your device, add the metrics-collector module to your deployment and configure it to collect and transport module metrics to Azure Monitor.
+> [!VIDEO https://aka.ms/docs/player?id=94a7d988-4a35-4590-9dd8-a511cdd68bee]
+
+<a href="https://aka.ms/docs/player?id=94a7d988-4a35-4590-9dd8-a511cdd68bee" target="_blank">IoT Edge integration with Azure Monitor</a> (4:06)
+
## Architecture

# [IoT Hub](#tab/iothub)
You can remotely monitor your IoT Edge fleet using Azure Monitor and built-in me
## Metrics collector module
-A Microsoft-supplied metrics-collector module can be added to an IoT Edge deployment to collect module metrics and send them to Azure Monitor. The module code is open-source and available in the [IoT Edge GitHub repo](https://github.com/Azure/iotedge/tree/release/1.1/edge-modules/azure-monitor).
+A Microsoft-supplied metrics-collector module can be added to an IoT Edge deployment to collect module metrics and send them to Azure Monitor. The module code is open-source and available in the [IoT Edge GitHub repo](https://github.com/Azure/iotedge/tree/release/1.1/edge-modules/metrics-collector).
The metrics-collector module is provided as a multi-arch Docker container image that supports Linux X64, ARM32, ARM64, and Windows X64 (version 1809). It's publicly available at **[`mcr.microsoft.com/azureiotedge-metrics-collector`](https://aka.ms/edgemon-metrics-collector)**.
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
Use this sample command on the downstream device to test that it can connect to
openssl s_client -connect mygateway.contoso.com:8883 -CAfile <CERTDIR>/certs/azure-iot-test-only.root.ca.cert.pem -showcerts ```
-This command tests connections over MQTTS (port 8883). If you're using a different protocol, adjust the command as necessary for AMQPS (5671) or HTTPS (433)
+This command tests connections over MQTTS (port 8883). If you're using a different protocol, adjust the command as necessary for AMQPS (5671) or HTTPS (443)
The output of this command may be long, including information about all the certificates in the chain. If your connection is successful, you'll see a line like `Verification: OK` or `Verify return code: 0 (ok)`.
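The protocol-to-port mapping can be sketched as follows; `can_connect_tls` is a hypothetical helper (not part of any Azure SDK) that mirrors the `openssl s_client` check, and it isn't invoked against a real gateway here:

```python
import socket
import ssl

# Standard TLS ports for the protocols IoT Edge downstream devices use.
PORTS = {"MQTTS": 8883, "AMQPS": 5671, "HTTPS": 443}

def can_connect_tls(hostname: str, port: int, cafile: str,
                    timeout: float = 5.0) -> bool:
    """Return True if a TLS handshake with the gateway succeeds on this port."""
    context = ssl.create_default_context(cafile=cafile)
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True
    except (OSError, ssl.SSLError):
        return False

# Example (not executed here):
# can_connect_tls("mygateway.contoso.com", PORTS["MQTTS"],
#                 "azure-iot-test-only.root.ca.cert.pem")
print(PORTS)
```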
iot-edge How To Continuous Integration Continuous Deployment Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-continuous-integration-continuous-deployment-classic.md
This pipeline is now configured to run automatically when you push new code to y
## Next steps
-* IoT Edge DevOps sample in [Azure DevOps Starter for IoT Edge](how-to-devops-starter.md)
* Understand the IoT Edge deployment in [Understand IoT Edge deployments for single devices or at scale](module-deployment-monitoring.md) * Walk through the steps to create, update, or delete a deployment in [Deploy and monitor IoT Edge modules at scale](how-to-deploy-at-scale.md).
iot-edge How To Continuous Integration Continuous Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-continuous-integration-continuous-deployment.md
Continue to the next section to build the release pipeline.
## Next steps
-* IoT Edge DevOps sample in [Azure DevOps Starter for IoT Edge](how-to-devops-starter.md)
* Understand the IoT Edge deployment in [Understand IoT Edge deployments for single devices or at scale](module-deployment-monitoring.md) * Walk through the steps to create, update, or delete a deployment in [Deploy and monitor IoT Edge modules at scale](how-to-deploy-at-scale.md).
iot-edge How To Devops Starter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-devops-starter.md
- Title: CI/CD pipeline with Azure DevOps Starter - Azure IoT Edge | Microsoft Docs
-description: Azure DevOps Starter makes it easy to get started on Azure. It helps you launch an Azure IoT Edge app of your choice in few quick steps.
-- Previously updated : 08/25/2020-----
-# Create a CI/CD pipeline for IoT Edge with Azure DevOps Starter
--
-Configure continuous integration (CI) and continuous delivery (CD) for your IoT Edge application with DevOps Projects. DevOps Starter simplifies the initial configuration of a build and release pipeline in Azure Pipelines.
-
-If you don't have an active Azure subscription, create a [free account](https://azure.microsoft.com/free) before you begin.
-
-## Sign in to the Azure portal
-
-DevOps Starter creates a CI/CD pipeline in Azure DevOps. You can create a new Azure DevOps organization or use an existing organization. DevOps Starter also creates Azure resources in the Azure subscription of your choice.
-
-1. Sign in to the [Microsoft Azure portal](https://portal.azure.com).
-
-1. In the left pane, select **Create a resource**, and then search for **DevOps Starter**.
-
-1. Select **Create**.
-
-1. By default, the DevOps Starter is set up with GitHub. To utilize the features in this how-to, switch the DevOps Starter to set up using Azure DevOps. Follow the **change settings here** link.
-
- ![Select change settings here to switch from GitHub to Azure DevOps](./media/how-to-devops-starter/create-with-github-change-settings.png)
-
-1. In the right pane, choose the **Azure DevOps** tile, and select **Done**.
-
- ![Select Azure DevOps to set up your DevOps Starter](./media/how-to-devops-starter/select-azure-devops.png)
-
- You should now see that the DevOps Starter is setting up with Azure DevOps.
-
-## Create a new application pipeline
-
-1. Your Azure IoT Edge module(s) can be written in [C#](tutorial-csharp-module.md), [Node.js](tutorial-node-module.md), [Python](tutorial-python-module.md), [C](tutorial-c-module.md) and [Java](tutorial-java-module.md). Select your preferred language to start a new application: **.NET**, **Node.js**, **Python**, **C**, or **Java**. Select **Next** to continue.
-
- ![Select language to create a new application](./media/how-to-devops-starter/select-language.png)
-
-2. Select **Simple IoT** as your application framework, and then select **Next**.
-
- ![Select Simple IoT framework](media/how-to-devops-starter/select-iot.png)
-
-3. Select **IoT Edge** as the Azure service that deploys your application, and then select **Next**.
-
- ![Select IoT Edge service](media/how-to-devops-starter/select-iot-edge.png)
-
-4. Create a new free Azure DevOps organization or choose an existing organization.
-
- 1. Provide a name for your project.
-
- 2. Select your Azure DevOps organization. If you don't have an existing organization, select **Additional settings** to create a new one.
-
- 3. Select your Azure subscription.
-
- 4. Use the IoT Hub name generated by your project name, or provide your own.
-
- 5. Accept the default location, or choose one close to you.
-
- 6. Select **Additional settings** to configure the Azure resources that DevOps Starter creates on your behalf.
-
- 7. Select **Done** to finish creating your project.
-
- ![Name and create project](media/how-to-devops-starter/create-project.png)
-
-After a few minutes, the DevOps Starter dashboard is displayed in the Azure portal. Select your project name to see the progress. You may need to refresh the page. A sample IoT Edge application is set up in a repository in your Azure DevOps organization, a build is executed, and your application is deployed to the IoT Edge device. This dashboard provides visibility into your code repository, the CI/CD pipeline, and your application in Azure.
-
- ![View project in Azure portal](./media/how-to-devops-starter/portal.png)
-
-## Commit code changes and execute CI/CD
-
-DevOps Starter created a Git repository for your project in Azure Repos. In this section, you view the repository and make code changes to your application.
-
-1. To navigate to the repo created for your project, select **Repositories** in the menu of your project dashboard. This link opens a browser tab and the Azure DevOps repository for your new project.
-
- ![View repository generated in Azure Repos](./media/how-to-devops-starter/view-repositories.png)
-
- > [!NOTE]
- > The following steps walk through using the web browser to make code changes. If you want to clone your repository locally instead, select **Clone** from the top right of the window. Use the provided URL to clone your Git repository in Visual Studio Code or your preferred development tool.
-
-2. The repository already contains code for a module called **FilterModule** based on the application language that you chose in the creation process. Open the **modules/FilterModule/module.json** file.
-
- ![Open module.json file in Azure Repos](./media/how-to-devops-starter/open-module-json.png)
-
-3. Notice that this file uses [Azure DevOps build variables](/azure/devops/pipelines/build/variables#build-variables) in the **version** parameter. This configuration ensures that a new version of the module will be created every time a new build runs.
-
-## Examine the CI/CD pipeline
-
-In the previous sections, Azure DevOps Starter automatically configured a full CI/CD pipeline for your IoT Edge application. Now, explore and customize the pipeline as needed. Use the following steps to familiarize yourself with the Azure DevOps build and release pipelines.
-
-1. To view the build pipelines in your DevOps project, select **Build Pipelines** in the menu of your project dashboard. This link opens a browser tab and the Azure DevOps build pipeline for your new project.
-
- ![View build pipelines in Azure Pipelines](./media/how-to-devops-starter/view-build-pipelines.png)
-
-2. Open the automatically generated build pipeline and select **Edit** in the top right.
-
- ![Edit build pipeline](media/how-to-devops-starter/click-edit-button.png)
-
-3. In the panel that opens, you can examine the tasks that occur when your build pipeline runs. The build pipeline performs various tasks, such as fetching sources from the Git repository, building IoT Edge module images, pushing IoT Edge modules to a container registry, and publishing outputs that are used for deployments. To learn more about Azure IoT Edge tasks in Azure DevOps, see [Configure Azure Pipelines for continuous integration](how-to-continuous-integration-continuous-deployment-classic.md#create-a-build-pipeline-for-continuous-integration).
-
-4. Select the **Pipeline** header at the top of the build pipeline to open the pipeline details. Change the name of your build pipeline to something more descriptive.
-
- ![Edit the pipeline details](./media/how-to-devops-starter/edit-build-pipeline.png)
-
-5. Select **Save & queue**, and then select **Save**. It is optional to comment.
-
-6. Select **Triggers** from the build pipeline menu. DevOps Starter automatically created a CI trigger, and every commit to the repository starts a new build. You can optionally choose to include or exclude branches from the CI process.
-
-7. Select **Retention**. Follow the link to redirect you to the project settings, where the retention policies are located. Depending on your scenario, you can specify policies to keep or remove a certain number of builds.
-
-8. Select **History**. The history panel contains an audit trail of recent changes to the build. Azure Pipelines keeps track of any changes that are made to the build pipeline, and it allows you to compare versions.
-
-9. When you're done exploring the build pipeline, navigate to the corresponding release pipeline. Select **Releases** under **Pipelines**, then select **Edit** to view the pipeline details.
-
- ![View release pipeline](media/how-to-devops-starter/release-pipeline.png)
-
-10. Under **Artifacts**, select **Drop**. The source that this artifact watches is the output of the build pipeline you examined in the previous steps.
-
-11. Next to the **Drop** icon, select the **Continuous deployment trigger** that looks like a lightning bolt. This release pipeline has enabled the trigger, which runs a deployment every time there is a new build artifact available. Optionally, you can disable the trigger so that your deployments require manual execution.
-
-12. In the menu for your release pipeline, select **Tasks** then choose the **dev** stage from the dropdown list. DevOps Projects created a release stage for you that creates an IoT hub, creates an IoT Edge device in that hub, deploys the sample module from the build pipeline, and provisions a virtual machine to run as your IoT Edge device. To learn more about Azure IoT Edge tasks for CD, see [Configure Azure Pipelines for continuous deployment](how-to-continuous-integration-continuous-deployment-classic.md#create-a-release-pipeline-for-continuous-deployment).
-
- ![View continuous deployment tasks](media/how-to-devops-starter/choose-release.png)
-
-13. On the right, select **View releases**. This view shows a history of releases.
-
-14. Select the name of a release to view more information about it.
-
-## Clean up resources
-
-You can delete Azure App Service and other related resources that you created when you don't need them anymore. Use the **Delete** functionality on the DevOps Starter dashboard.
-
-## Next steps
-
-* Learn about the Tasks for Azure IoT Edge on Azure DevOps in [Continuous integration and continuous deployment to Azure IoT Edge](how-to-continuous-integration-continuous-deployment.md)
-* Understand the IoT Edge deployment in [Understand IoT Edge deployments for single devices or at scale](module-deployment-monitoring.md)
-* Walk through the steps to create, update, or delete a deployment in [Deploy and monitor IoT Edge modules at scale](how-to-deploy-at-scale.md).
iot-fundamentals Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/security-recommendations.md
# Security recommendations for Azure Internet of Things (IoT) deployment
-This article contains security recommendations for IoT. Implementing these recommendations will help you fulfill your security obligations as described in our shared responsibility model. For more information on what Microsoft does to fulfill service provider responsibilities, read [Shared responsibilities for cloud computing](https://gallery.technet.microsoft.com/Shared-Responsibilities-81d0ff91).
+This article contains security recommendations for IoT. Implementing these recommendations will help you fulfill your security obligations as described in our shared responsibility model. For more information on what Microsoft does to fulfill service provider responsibilities, read [Shared responsibilities for cloud computing](../security/fundamentals/shared-responsibility.md).
Some of the recommendations included in this article can be automatically monitored by Microsoft Defender for IoT, the first line of defense in protecting your resources in Azure. It periodically analyzes the security state of your Azure resources to identify potential security vulnerabilities. It then provides you with recommendations on how to address them.
kinect-dk Body Sdk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/body-sdk-download.md
This document provides links to install each version of the Azure Kinect Body Tr
Version | Download
--|-
+1.1.1 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=104015) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/1.1.1)
1.1.0 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=102901)
1.0.1 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=100942) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/1.0.1)
1.0.0 | [msi](https://www.microsoft.com/en-us/download/details.aspx?id=100848) [nuget](https://www.nuget.org/packages/Microsoft.Azure.Kinect.BodyTracking/1.0.0)
If the command succeeds, the SDK is ready for use.
## Change log
+### v1.1.1
+* [Feature] Added cmake support to all body tracking samples
+* [Feature] NuGet package returns. Developed new NuGet package that includes Microsoft developed body tracking dlls and headers, and ONNX runtime dependencies. The package no longer includes the NVIDIA CUDA and TRT dependencies. These continue to be included in the MSI package.
+* [Feature] Upgraded to ONNX Runtime v1.10. Recommended NVIDIA driver version is 472.12 (Game Ready) or 472.84 (Studio). There are OpenGL issues with later drivers.
+* [Bug Fix] CPU mode no longer requires NVIDIA CUDA dependencies [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1154)
+* [Bug Fix] Verified samples compile with Visual Studio 2022 and updated samples to use this release [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1250)
+* [Bug Fix] Added const qualifier to APIs [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1365)
+* [Bug Fix] Added check for nullptr handle in shutdown() [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1373)
+* [Bug Fix] Improved dependencies checks [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1510)
+* [Bug Fix] Updated REDIST.TXT file [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1541)
+* [Bug Fix] Improved DirectML performance [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1546)
+* [Bug Fix] Fixed exception declaration in frame::get_body() [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1573)
+* [Bug Fix] Fixed memory leak [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1576)
+* [Bug Fix] Updated dependencies list [Link](https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1644)
+ ### v1.1.0

* [Feature] Add support for DirectML (Windows only) and TensorRT execution of pose estimation model. See FAQ on new execution environments.
* [Feature] Add `model_path` to `k4abt_tracker_configuration_t` struct. Allows users to specify the pathname for pose estimation model. Defaults to `dnn_model_2_0_op11.onnx` standard pose estimation model located in the current directory.
kinect-dk Body Sdk Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/body-sdk-setup.md
ms.prod: kinect-dk Last updated 06/26/2019
-keywords: kinect, azure, sensor, access, depth, sdk, body, tracking, joint, setup, cuda, nvidia
+keywords: kinect, azure, sensor, access, depth, sdk, body, tracking, joint, setup, onnx, directml, cuda, trt, nvidia
#Customer intent: As an Azure Kinect DK developer, I want to set up Azure Kinect body tracking.
If everything is set up correctly, a window with a 3D point cloud and tracked bo
![Body Tracking 3D Viewer](./media/quickstarts/samples-simple3dviewer.png)
+## Specifying ONNX Runtime execution environment
+
+The Body Tracking SDK supports CPU, CUDA, DirectML (Windows only), and TensorRT execution environments for running inference on the pose estimation model. `K4ABT_TRACKER_PROCESSING_MODE_GPU` defaults to CUDA execution on Linux and DirectML execution on Windows. Three additional modes select specific execution environments: `K4ABT_TRACKER_PROCESSING_MODE_GPU_CUDA`, `K4ABT_TRACKER_PROCESSING_MODE_GPU_DIRECTML`, and `K4ABT_TRACKER_PROCESSING_MODE_GPU_TENSORRT`.
+
+> [!NOTE]
+> ONNX Runtime displays warnings for opcodes that are not accelerated. These may be safely ignored.
+
+ONNX Runtime includes environment variables to control TensorRT model caching. The recommended values are:
+- ORT_TENSORRT_ENGINE_CACHE_ENABLE=1
+- ORT_TENSORRT_CACHE_PATH="pathname"
+
+The folder must be created prior to starting body tracking.
+
+> [!IMPORTANT]
+> TensorRT pre-processes the model prior to inference, resulting in extended start-up times compared to other execution environments. Engine caching limits this cost to the first execution; however, the cache is experimental and is specific to the model, ONNX Runtime version, TensorRT version, and GPU model.
+
+The TensorRT execution environment supports both FP32 (default) and FP16. FP16 provides roughly a 2x performance increase in exchange for a minimal decrease in accuracy. To specify FP16:
+- ORT_TENSORRT_FP16_ENABLE=1
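As a minimal sketch of the settings above (assuming a bash shell and a cache path of your choosing), the TensorRT environment variables can be set and the required cache folder created before launching a body tracking application:

```shell
# Enable TensorRT engine caching; the cache path here is an example --
# substitute any writable folder.
export ORT_TENSORRT_ENGINE_CACHE_ENABLE=1
export ORT_TENSORRT_CACHE_PATH="$HOME/trt_engine_cache"

# The cache folder must exist before body tracking starts.
mkdir -p "$ORT_TENSORRT_CACHE_PATH"

# Optional: enable FP16 to trade a small accuracy loss for ~2x speed.
export ORT_TENSORRT_FP16_ENABLE=1
```

On Windows, set the same variables with `setx` or in the System Properties dialog before starting the application.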
+
+## Required DLLs for ONNX Runtime execution environments
+
+|Mode | ORT 1.10 | CUDA 11.4.3 | CUDNN 8.2.2.26 | TensorRT 8.0.3.4 |
+|-|-|-|-|-|
+| CPU | msvcp140 | - | - | - |
+| | onnxruntime | | | |
+| CUDA | msvcp140 | cudart64_110 | cudnn64_8 | - |
+| | onnxruntime | cufft64_10 | cudnn_ops_infer64_8 | |
+| | onnxruntime_providers_cuda | cublas64_11 | cudnn_cnn_infer64_8 | |
+| | onnxruntime_providers_shared | cublasLt64_11 | | |
+| DirectML | msvcp140 | - | - | - |
+| | onnxruntime | | | |
+| | directml | | | |
+| TensorRT | msvcp140 | cudart64_110 | - | nvinfer |
+| | onnxruntime | cufft64_10 | | nvinfer_plugin |
+| | onnxruntime_providers_cuda | cublas64_11 | | |
+| | onnxruntime_providers_shared | cublasLt64_11 | | |
+| | onnxruntime_providers_tensorrt | nvrtc64_112_0 | | |
+| | | nvrtc-builtins64_114 | | |
+ ## Examples

You can find examples of how to use the body tracking SDK [here](https://github.com/microsoft/Azure-Kinect-Samples/tree/master/body-tracking-samples).
kinect-dk Reset Azure Kinect Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/reset-azure-kinect-dk.md
You may encounter a situation in which you have to reset your Azure Kinect DK ba
1. Power off your Azure Kinect DK. To do this, remove the USB cable and power cable.

   ![A diagram that shows the location of the screw that covers the reset button.](media/reset-azure-kinect-dk-diagram.png)
-1. To find the reset button, remove the screw that's located in the tripod mount lock.
-1. Reconnect the power cable.
-1. Insert the tip of a straightened paperclip into the empty screw hole, in the tripod mount lock.
-1. Use the paperclip to gently press and hold the reset button.
-1. While you hold the reset button, reconnect the USB cable.
-1. After about 3 seconds, the power indicator light changes to amber. After the light changes, release the reset button.
+2. To find the reset button, remove the screw that's located in the tripod mount lock.
+3. Reconnect the power cable.
+4. Insert the tip of a straightened paperclip into the empty screw hole, in the tripod mount lock.
+ >[!CAUTION]
+ >Never use a sharp-ended tool, such as a pushpin, to press the reset button. Instead, use a flat-ended tool, such as a paperclip, to avoid damaging the reset button.
+
+5. Use the paperclip to gently press and hold the reset button.
+6. While you hold the reset button, reconnect the USB cable.
+7. After about 3 seconds, the power indicator light changes to amber. After the light changes, release the reset button.
After you release the reset button, the power indicator light blinks white and amber while the device resets.
-1. Wait for the power indicator light to become solid white.
-1. Replace the screw in the tripod mount lock, over the reset button.
-1. Use Azure Kinect Viewer to verify that the firmware was reset. To do this, launch the [Azure Kinect Viewer](azure-kinect-viewer.md), and then select **Device firmware version info** to see the firmware version that is installed on your Azure Kinect DK.
+8. Wait for the power indicator light to become solid white.
+9. Replace the screw in the tripod mount lock, over the reset button.
+10. Use Azure Kinect Viewer to verify that the firmware was reset. To do this, launch the [Azure Kinect Viewer](azure-kinect-viewer.md), and then select **Device firmware version info** to see the firmware version that is installed on your Azure Kinect DK.
Always make sure that you have the latest firmware installed on the device. To get the latest firmware version, use the Azure Kinect Firmware Tool. For more information about how to check your firmware status, see [Check device firmware version](azure-kinect-firmware-tool.md#check-device-firmware-version).

+ ## Related topics

- [About Azure Kinect DK](about-azure-kinect-dk.md)
kinect-dk Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kinect-dk/troubleshooting.md
The Sensor SDK C# documentation is located [here](https://microsoft.github.io/Az
The Body Tracking SDK C# documentation is located [here](https://microsoft.github.io/Azure-Kinect-Body-Tracking/release/1.x.x/namespace_microsoft_1_1_azure_1_1_kinect_1_1_body_tracking.html).
-## Specifying ONNX Runtime execution environment
-
-The Body Tracking SDK supports CPU, CUDA, DirectML (Windows only) and TensorRT execution environments to inference the pose estimation model. The `K4ABT_TRACKER_PROCESSING_MODE_GPU` defaults to CUDA execution on Linux and DirectML execution on Windows. Three additional modes have been added to select specific execution environments: `K4ABT_TRACKER_PROCESSING_MODE_GPU_CUDA`, `K4ABT_TRACKER_PROCESSING_MODE_GPU_DIRECTML`, and `K4ABT_TRACKER_PROCESSING_MODE_GPU_TENSORRT`.
-
-> [!NOTE]
-> ONNX Runtime displays warnings for opcodes that are not accelerated. These may be safely ignored.
-
-ONNX Runtime includes environment variables to control TensorRT model caching. The recommended values are:
-- ORT_TENSORRT_ENGINE_CACHE_ENABLE=1 -- ORT_TENSORRT_CACHE_PATH="pathname"-
-The folder must be created prior to starting body tracking.
-
-> [!IMPORTANT]
-> TensorRT pre-processes the model prior to inference resulting in extended start up times when compared to other execution environments. Engine caching limits this to first execution however it is experimental and is specific to the model, ONNX Runtime version, TensorRT version and GPU model.
-
-The TensorRT execution environment supports both FP32 (default) and FP16. FP16 trades ~2x performance increase for minimal accuracy decrease. To specify FP16:
-- ORT_TENSORRT_FP16_ENABLE=1-
-## Required DLLs for ONNX Runtime execution environments
-
-|Mode | CUDA 11.1 | CUDNN 8.0.5 | TensorRT 7.2.1 |
-|-|-|-|-|
-| CPU | cudart64_110 | cudnn64_8 | - |
-| | cufft64_10 | | |
-| | cublas64_11 | | |
-| | cublasLt64_11 | | |
-| CUDA | cudart64_110 | cudnn64_8 | - |
-| | cufft64_10 | cudnn_ops_infer64_8 | |
-| | cublas64_11 | cudnn_cnn_infer64_8 | |
-| | cublasLt64_11 | | |
-| DirectML | cudart64_110 | cudnn64_8 | - |
-| | cufft64_10 | | |
-| | cublas64_11 | | |
-| | cublasLt64_11 | | |
-| TensorRT | cudart64_110 | cudnn64_8 | nvinfer |
-| | cufft64_10 | cudnn_ops_infer64_8 | nvinfer_plugin |
-| | cublas64_11 | cudnn_cnn_infer64_8 | myelin64_1 |
-| | cublasLt64_11 | | |
-| | nvrtc64_111_0 | | |
-| | nvrtc-builtins64_111 | | |
+## Changes to contents of Body Tracking packages
+
+Both the MSI and NuGet packages no longer include the Microsoft Visual C++ Redistributable Package files. Download the latest package [here](https://docs.microsoft.com/cpp/windows/latest-supported-vc-redist).
+
+The NuGet package has returned; however, it no longer includes Microsoft DirectML or the NVIDIA CUDA and TensorRT files.
## Next steps
-[More support information](support.md)
+[More support information](support.md)
load-balancer Quickstart Load Balancer Standard Public Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-portal.md
description: This quickstart shows how to create a load balancer by using the Az
Previously updated : 08/09/2021 Last updated : 03/16/2022 #Customer intent: I want to create a load balancer so that I can load balance internet traffic to VMs.
# Quickstart: Create a public load balancer to load balance VMs using the Azure portal
-Get started with Azure Load Balancer by using the Azure portal to create a public load balancer and three virtual machines.
+Get started with Azure Load Balancer by using the Azure portal to create a public load balancer and two virtual machines.
## Prerequisites
Get started with Azure Load Balancer by using the Azure portal to create a publi
Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com). --
-# [**Standard SKU**](#tab/option-1-create-load-balancer-standard)
-
->[!NOTE]
->Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](skus.md)**.
- ## Create the virtual network
-In this section, you'll create a virtual network and subnet.
+In this section, you'll create a virtual network, subnet, and Azure Bastion host. The virtual network and subnet contains the load balancer and virtual machines. The bastion host is used to securely manage the virtual machines and install IIS to test the load balancer.
1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.

2. In **Virtual networks**, select **+ Create**.
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
+3. In **Create virtual network**, enter or select the following information in the **Basics** tab:
| **Setting** | **Value** |
||--|
In this section, you'll create a virtual network and subnet.
| Resource Group | Select **Create new**. </br> In **Name** enter **CreatePubLBQS-rg**. </br> Select **OK**. |
| **Instance details** | |
| Name | Enter **myVNet** |
- | Region | Select **(Europe) West Europe** |
+ | Region | Select **West Europe** |
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
+4. Select the **IP Addresses** tab or select **Next: IP Addresses** at the bottom of the page.
5. In the **IP Addresses** tab, enter this information:
In this section, you'll create a virtual network and subnet.
|--|-|
| IPv4 address space | Enter **10.1.0.0/16** |
-6. Under **Subnet name**, select the word **default**.
+6. Under **Subnet name**, select the word **default**. If a subnet isn't present, select **+ Add subnet**.
7. In **Edit subnet**, enter this information:
In this section, you'll create a virtual network and subnet.
| Subnet name | Enter **myBackendSubnet** |
| Subnet address range | Enter **10.1.0.0/24** |
-8. Select **Save**.
+8. Select **Save** or **Add**.
9. Select the **Security** tab.
In this section, you'll create a virtual network and subnet.
| AzureBastionSubnet address space | Enter **10.1.1.0/27** |
| Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |

11. Select the **Review + create** tab or select the **Review + create** button.

12. Select **Create**.
+
+ > [!NOTE]
+ > The virtual network and subnet are created immediately. The Bastion host creation is submitted as a job and will complete within 10 minutes. You can proceed to the next steps while the Bastion host is created.
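For readers who prefer scripting, the portal steps above can be approximated with Azure CLI. This is a sketch: the resource names match the walkthrough, but verify the flags against your installed CLI version before relying on it.

```shell
# Resource group and virtual network with the backend subnet from the
# walkthrough (CreatePubLBQS-rg, myVNet, myBackendSubnet).
az group create --name CreatePubLBQS-rg --location westeurope

az network vnet create \
  --resource-group CreatePubLBQS-rg \
  --name myVNet \
  --address-prefix 10.1.0.0/16 \
  --subnet-name myBackendSubnet \
  --subnet-prefix 10.1.0.0/24

# Bastion requires a dedicated subnet named exactly AzureBastionSubnet.
az network vnet subnet create \
  --resource-group CreatePubLBQS-rg \
  --vnet-name myVNet \
  --name AzureBastionSubnet \
  --address-prefix 10.1.1.0/27

az network public-ip create \
  --resource-group CreatePubLBQS-rg \
  --name myBastionIP \
  --sku Standard

az network bastion create \
  --resource-group CreatePubLBQS-rg \
  --name myBastionHost \
  --vnet-name myVNet \
  --public-ip-address myBastionIP
```

As in the portal, the Bastion deployment can take several minutes; the remaining resources don't depend on it completing.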
-## Create NAT gateway
-
-In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
-
-1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
-
-2. In **NAT gateways**, select **+ Create**.
-
-3. In **Create network address translation (NAT) gateway**, enter or select the following information:
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreatePubLBQS-rg**. |
- | **Instance details** | |
- | NAT gateway name | Enter **myNATgateway**. |
- | Availability zone | Select **None**. |
- | Idle timeout (minutes) | Enter **15**. |
-
-4. Select the **Outbound IP** tab or select the **Next: Outbound IP** button at the bottom of the page.
-
-5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
-
-6. Enter **myNATGatewayIP** in **Name** in **Add a public IP address**.
-
-7. Select **OK**.
-
-8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
-
-9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
-
-10. Select **myBackendSubnet** under **Subnet name**.
-
-11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
-
-12. Select **Create**.
-
-## <a name="create-load-balancer-resources"></a> Create load balancer
+## Create load balancer
In this section, you'll create a zone-redundant load balancer that load balances virtual machines. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
During the creation of the load balancer, you'll configure:
* Frontend IP address
* Backend pool
* Inbound load-balancing rules
+* Health probe
1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-2. In the **Load balancer** page, select **Create**.
+2. In the **Load balancer** page, select **+ Create**.
-3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
+3. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
| Setting | Value |
| | |
During the creation of the load balancer, you'll configure:
| Resource group | Select **CreatePubLBQS-rg**. |
| **Instance details** | |
| Name | Enter **myLoadBalancer** |
- | Region | Select **(Europe) West Europe**. |
- | Type | Select **Public**. |
+ | Region | Select **West Europe**. |
| SKU | Leave the default **Standard**. |
+ | Type | Select **Public**. |
| Tier | Leave the default **Regional**. |

:::image type="content" source="./media/quickstart-load-balancer-standard-public-portal/create-standard-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true":::

4. Select **Next: Frontend IP configuration** at the bottom of the page.
-5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
+5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
-6. Enter **LoadBalancerFrontend** in **Name**.
+6. Enter **myFrontend** in **Name**.
7. Select **IPv4** or **IPv6** for the **IP version**.
During the creation of the load balancer, you'll configure:
21. Select **Add**.
-22. Select the **Next: Inbound rules** button at the bottom of the page.
+22. Select **Next: Inbound rules** at the bottom of the page.
23. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
During the creation of the load balancer, you'll configure:
| - | -- |
| Name | Enter **myHTTPRule** |
| IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **LoadBalancerFrontend**. |
+ | Frontend IP address | Select **myFrontend**. |
+ | Backend pool | Select **myBackendPool**. |
| Protocol | Select **TCP**. |
| Port | Enter **80**. |
| Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**. |
| Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
| Session persistence | Select **None**. |
| Idle timeout (minutes) | Enter or select **15**. |
During the creation of the load balancer, you'll configure:
27. Select **Create**.

> [!NOTE]
- > In this example we created a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed as it's optional isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
+ > In this example, we'll create a NAT gateway to provide outbound internet access. The outbound rules tab in the configuration is bypassed because it's optional and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
> For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
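The load balancer configured in the portal above can also be sketched with Azure CLI. The names match the walkthrough (frontend **myFrontend**, pool **myBackendPool**, probe **myHealthProbe**, rule **myHTTPRule**); verify the flags against your CLI version before running.

```shell
# Standard SKU public IP for the load balancer frontend.
az network public-ip create \
  --resource-group CreatePubLBQS-rg \
  --name myPublicIP \
  --sku Standard

# Load balancer with frontend IP configuration and empty backend pool.
az network lb create \
  --resource-group CreatePubLBQS-rg \
  --name myLoadBalancer \
  --sku Standard \
  --public-ip-address myPublicIP \
  --frontend-ip-name myFrontend \
  --backend-pool-name myBackendPool

# TCP health probe on port 80.
az network lb probe create \
  --resource-group CreatePubLBQS-rg \
  --lb-name myLoadBalancer \
  --name myHealthProbe \
  --protocol tcp \
  --port 80

# Load-balancing rule: frontend port 80 to backend port 80.
az network lb rule create \
  --resource-group CreatePubLBQS-rg \
  --lb-name myLoadBalancer \
  --name myHTTPRule \
  --protocol tcp \
  --frontend-port 80 \
  --backend-port 80 \
  --frontend-ip-name myFrontend \
  --backend-pool-name myBackendPool \
  --probe-name myHealthProbe \
  --idle-timeout 15
```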
-## Create virtual machines
-
-In this section, you'll create three VMs (**myVM1**, **myVM2** and **myVM3**) in three different zones (**Zone 1**, **Zone 2**, and **Zone 3**).
-
-These VMs are added to the backend pool of the load balancer that was created earlier.
+## Create NAT gateway
-1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network.
-2. In **Virtual machines**, select **+ Create** > **Virtual machine**.
-
-3. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
- | Setting | Value |
- |--|-|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **CreatePubLBQS-rg** |
- | **Instance details** | |
- | Virtual machine name | Enter **myVM1** |
- | Region | Select **(Europe) West Europe** |
- | Availability Options | Select **Availability zones** |
- | Availability zone | Select **1** |
- | Image | Select **Windows Server 2019 Datacenter - Gen1** |
- | Azure Spot instance | Leave the default of unchecked. |
- | Size | Choose VM size or take default setting |
- | **Administrator account** | |
- | Username | Enter a username |
- | Password | Enter a password |
- | Confirm password | Reenter password |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None** |
+2. In **NAT gateways**, select **+ Create**.
-4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-
-5. In the Networking tab, select or enter:
+3. In **Create network address translation (NAT) gateway**, enter or select the following information:
| Setting | Value |
- |-|-|
- | **Network interface** | |
- | Virtual network | **myVNet** |
- | Subnet | **myBackendSubnet** |
- | Public IP | Select **None**. |
- | NIC network security group | Select **Advanced**|
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
- | **Load balancing** |
- | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
- | **Load balancing settings** |
- | Load-balancing options | Select **Azure load balancing** |
- | Select a load balancer | Select **myLoadBalancer** |
- | Select a backend pool | Select **myBackendPool** |
-
-6. Select **Review + create**.
-
-7. Review the settings, and then select **Create**.
-
-8. Follow the steps 1 through 7 to create two more VMs with the following values and all the other settings the same as **myVM1**:
-
- | Setting | VM 2| VM 3|
- | - | -- ||
- | Name | **myVM2** |**myVM3**|
- | Availability zone | **2** |**3**|
- | Network security group | Select the existing **myNSG**| Select the existing **myNSG**|
--
-# [**Basic SKU**](#tab/option-2-create-load-balancer-basic)
-
->[!NOTE]
->Standard SKU load balancer is recommended for production workloads. For more information about SKUs, see **[Azure Load Balancer SKUs](skus.md)**.
-
-## Create the virtual network
-
-In this section, you'll create a virtual network and subnet.
-
-1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
-
-2. In **Virtual networks**, select **+ Create**.
-
-3. In **Create virtual network**, enter or select this information in the **Basics** tab:
-
- | **Setting** | **Value** |
- ||--|
- | **Project Details** | |
- | Subscription | Select your Azure subscription |
- | Resource Group | Select **Create new**. </br> In **Name** enter **CreatePubLBQS-rg**. </br> Select **OK**. |
- | **Instance details** | |
- | Name | Enter **myVNet** |
- | Region | Select **(Europe) West Europe** |
-
-4. Select the **IP Addresses** tab or select the **Next: IP Addresses** button at the bottom of the page.
-
-5. In the **IP Addresses** tab, enter this information:
-
- | Setting | Value |
- |--|-|
- | IPv4 address space | Enter **10.1.0.0/16** |
-
-6. Under **Subnet name**, select the word **default**.
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **CreatePubLBQS-rg**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **West Europe**. |
+ | Availability zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **15**. |
-7. In **Edit subnet**, enter this information:
+4. Select the **Outbound IP** tab or select **Next: Outbound IP** at the bottom of the page.
- | Setting | Value |
- |--|-|
- | Subnet name | Enter **myBackendSubnet** |
- | Subnet address range | Enter **10.1.0.0/27** |
+5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
-8. Select **Save**.
+6. Enter **myNATgatewayIP** in **Name**.
-9. Select the **Security** tab.
+7. Select **OK**.
-10. Under **BastionHost**, select **Enable**. Enter this information:
+8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
- | Setting | Value |
- |--|-|
- | Bastion name | Enter **myBastionHost** |
- | AzureBastionSubnet address space | Enter **10.1.1.0/24** |
- | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
+10. Select **myBackendSubnet** under **Subnet name**.
-11. Select the **Review + create** tab or select the **Review + create** button.
+11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
12. Select **Create**.

## Create virtual machines
-In this section, you'll create three VMs (**myVM1**, **myVM2**, and **myVM3**).
+In this section, you'll create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**).
-The three VMs will be added to an availability set named **myAvailabilitySet**.
+These VMs are added to the backend pool of the load balancer that was created earlier.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.

2. In **Virtual machines**, select **+ Create** > **Virtual machine**.
-3. In **Create a virtual machine**, type or select the values in the **Basics** tab:
+3. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
| Setting | Value |
|--|-|
The three VMs will be added to an availability set named **myAvailabilitySet**.
| **Instance details** | |
| Virtual machine name | Enter **myVM1** |
| Region | Select **(Europe) West Europe** |
- | Availability Options | Select **Availability set** |
- | Availability set | Select **Create new**. </br> Enter **myAvailabilitySet** in **Name**. </br> Select **OK** |
- | Image | **Windows Server 2019 Datacenter - Gen1** |
+ | Availability Options | Select **Availability zones** |
+ | Availability zone | Select **Zone 1** |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2** |
| Azure Spot instance | Leave the default of unchecked. |
| Size | Choose VM size or take default setting |
| **Administrator account** | |
| Username | Enter a username |
| Password | Enter a password |
| Confirm password | Reenter password |
- | **Inbound port rules** | |
- | Public inbound ports | Select **None**. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
-5. In the Networking tab, select or enter:
+5. In the Networking tab, select or enter the following information:
| Setting | Value |
- |-|-|
+ | - | -- |
| **Network interface** | |
| Virtual network | Select **myVNet** |
| Subnet | Select **myBackendSubnet** |
- | Public IP | Select **None** |
- | NIC network security group | Select **Advanced**|
- | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Destination port ranges**, enter **80**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced** |
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
+ | Delete NIC when VM is deleted | Leave the default of **unselected**. |
+ | Accelerated networking | Leave the default of **selected**. |
| **Load balancing** |
- | Place this virtual machine behind an existing load-balancing solution? | Leave the default of unselected. |
-
-6. Select the **Management** tab, or select **Next** > **Management**.
-
-7. In the **Management** tab, select or enter:
-
- | Setting | Value |
- |||
- | **Monitoring** | |
- | Boot diagnostics | Select **Off** |
-
-8. Select **Review + create**.
+ | Place this virtual machine behind an existing load-balancing solution? | Select the check box. |
+ | **Load balancing settings** |
+ | Load-balancing options | Select **Azure load balancer** |
+ | Select a load balancer | Select **myLoadBalancer** |
+ | Select a backend pool | Select **myBackendPool** |
+
+6. Select **Review + create**.
-9. Review the settings, and then select **Create**.
-
-10. Follow the steps 1 through 9 to create two more VMs with the following values and all the other settings the same as **myVM1**:
-
- | Setting | VM 2 | VM 3 |
- | - | -- ||
- | Name | **myVM2** |**myVM3**|
- | Availability set| Select **myAvailabilitySet** | Select **myAvailabilitySet**|
- | Network security group | Select the existing **myNSG**| Select the existing **myNSG**|
-## Create load balancer
-
-In this section, you create a load balancer that load balances virtual machines.
-
-During the creation of the load balancer, you'll configure:
-
-* Frontend IP address
-* Backend pool
-* Inbound load-balancing rules
-
-1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
-
-2. In the **Load balancer** page, select **+ Create**.
-
-3. In the **Basics** tab of the **Create load balancer** page, enter, or select the following information:
-
- | Setting | Value |
- | | |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **CreatePubLBQS-rg**. |
- | **Instance details** | |
- | Name | Enter **myLoadBalancer** |
- | Region | Select **(Europe) West Europe**. |
- | Type | Select **Public**. |
- | SKU | Select **Basic**. |
-
- :::image type="content" source="./media/quickstart-load-balancer-standard-public-portal/create-basic-load-balancer.png" alt-text="Screenshot of create basic load balancer basics tab." border="true":::
-
-4. Select **Next: Frontend IP configuration** at the bottom of the page.
-
-5. In **Frontend IP configuration**, select **+ Add a frontend IP**.
-
-6. Enter **LoadBalancerFrontend** in **Name**.
-
-7. Select **IPv4** or **IPv6** for the **IP version**.
-
-8. Select **Create new** in **Public IP address**.
-
-9. In **Add a public IP address**, enter **myPublicIP** for **Name**.
-
-10. In **Assignment**, select **Static**.
-
-11. Select **OK**.
-
-12. Select **Add**.
-
-13. Select **Next: Backend pools** at the bottom of the page.
-
-14. In the **Backend pools** tab, select **+ Add a backend pool**.
-
-15. Enter **myBackendPool** for **Name** in **Add backend pool**.
-
-16. Select **myVNet** in **Virtual network**.
-
-17. In **Associated to**, select **Virtual machines**.
-
-18. Select **IPv4** or **IPv6** for **IP version**.
-
-19. In **Virtual machines**, select the blue **+ Add** button.
-
-20. In **Add virtual machines to backend pool**, select the boxes next to **myVM1**, **myVM2**, and **myVM3**.
-
-21. Select **Add**.
-
-22. Select **Add** in **Add backend pool**.
-
-23. Select the **Next: Inbound rules** button at the bottom of the page.
-
-24. In **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+7. Review the settings, and then select **Create**.
-25. In **Add load balancing rule**, enter or select the following information:
+8. Follow the steps 1 through 7 to create another VM with the following values and all the other settings the same as **myVM1**:
- | Setting | Value |
+ | Setting | VM 2
| - | -- |
- | Name | Enter **myHTTPRule** |
- | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
- | Frontend IP address | Select **LoadBalancerFrontend**. |
- | Protocol | Select **TCP**. |
- | Port | Enter **80**. |
- | Backend port | Enter **80**. |
- | Backend pool | Select **myBackendPool**. |
- | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **HTTP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
- | Session persistence | Select **None**. |
- | Idle timeout (minutes) | Enter or select **15**. |
- | Floating IP | Select **Disabled**. |
+ | Name | **myVM2** |
+ | Availability zone | **Zone 2** |
+ | Network security group | Select the existing **myNSG** |
-26. Select **Add**.
-
-27. Select the blue **Review + create** button at the bottom of the page.
-
-28. Select **Create**.
## Install IIS
During the creation of the load balancer, you'll configure:
2. Select **myVM1**.
-2. On the **Overview** page, select **Connect**, then **Bastion**.
-
-3. Select **Use Bastion**.
+3. On the **Overview** page, select **Connect**, then **Bastion**.
4. Enter the username and password entered during VM creation.
During the creation of the load balancer, you'll configure:
# Add a new htm file that displays server name
Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
- ```
+
+ ```
8. Close the Bastion session with **myVM1**.
-9. Repeat steps 1 to 8 to install IIS and the updated iisstart.htm file on **myVM2** and **myVM3**.
+9. Repeat steps 1 to 8 to install IIS and the updated iisstart.htm file on **myVM2**.
## Test the load balancer
-1. In the search box at the top of the page, enter **Load balancer**. Select **Load balancers** in the search results.
+1. In the search box at the top of the page, enter **Public IP**. Select **Public IP addresses** in the search results.
-2. Find the public IP address for the load balancer on the **Overview** page under **Public IP address**.
+2. In **Public IP addresses**, select **myPublicIP**.
-3. Copy the public IP address, and then paste it into the address bar of your browser. The custom VM page of the IIS Web server is displayed in the browser.
+3. Copy the item in **IP address**. Paste the public IP into the address bar of your browser. The custom VM page of the IIS Web server is displayed in the browser.
:::image type="content" source="./media/quickstart-load-balancer-standard-public-portal/load-balancer-test.png" alt-text="Screenshot of load balancer test":::
When no longer needed, delete the resource group, load balancer, and all related
In this quickstart, you:
-* Created an Azure Standard or Basic Load Balancer
-* Attached 3 VMs to the load balancer.
-* Tested the load balancer.
+* Created an Azure Load Balancer
+* Attached 2 VMs to the load balancer
+* Tested the load balancer
To learn more about Azure Load Balancer, continue to:

> [!div class="nextstepaction"]
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md
The product group is actively working on resolutions for the following known iss
|Issue |Description |Mitigation |
| - |||
-| IP based LB outbound IP | IP based LB leverages Azure's Default Outbound Access IP for outbound when no outbound rules are configured | In order to prevent outbound access from this IP, please leverage Outbound rules or a NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion |
+| IP based LB outbound IP | IP based LB leverages Azure's Default Outbound Access IP for outbound | In order to prevent outbound access from this IP, please leverage NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion |
| numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, is not respected. Load Balancer health probes will probe up/down immediately after 1 probe regardless of the property's configured value | To reflect the current behavior, please set the value of numberOfProbes ("Unhealthy threshold" in Portal) as 1 |
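The mitigation above (pin `numberOfProbes` to 1 so the configuration matches the actual probing behavior) can be captured when generating templates programmatically. This is a hedged sketch: the helper function is hypothetical, while the property names follow the load balancer health-probe schema shown in this article's scenario (HTTP probe on port 80):

```python
def make_health_probe(name: str, port: int, protocol: str = "Http",
                      request_path: str = "/") -> dict:
    """Build a health-probe properties block with numberOfProbes fixed at 1,
    reflecting the known issue that higher values are not respected."""
    return {
        "name": name,
        "properties": {
            "protocol": protocol,
            "port": port,
            "requestPath": request_path,
            "intervalInSeconds": 15,   # probing interval
            "numberOfProbes": 1,       # workaround: matches current behavior
        },
    }

probe = make_health_probe("myHealthProbe", 80)
```

Centralizing the value this way keeps portal-visible configuration ("Unhealthy threshold") consistent with what the load balancer actually does.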
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
ms.suite: integration Previously updated : 02/03/2022 Last updated : 03/16/2022
The following table lists the operations where you can use either the system-ass
| Operation type | Supported operations |
|-|-|
| Built-in | - Azure API Management <br>- Azure App Services <br>- Azure Functions <br>- HTTP <br>- HTTP + Webhook <p>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. However, they don't support the user-assigned managed identity for authenticating the same connections. |
-| Managed connector (**Preview**) | Single-authentication: <br>- Azure Automation <br>- Azure Event Grid <br>- Azure Key Vault <br>- Azure Resource Manager <br>- HTTP with Azure AD <p>Multi-authentication: <br>- Azure Blob Storage <br>- SQL Server |
+| Managed connector (**Preview**) | Single-authentication: <br>- Azure Automation <br>- Azure Event Grid <br>- Azure Key Vault <br>- Azure Resource Manager <br>- HTTP with Azure AD <p>Multi-authentication: <br>- Azure Blob Storage <br>- Azure Event Hubs <br>- Azure Service Bus <br>- SQL Server |
|||

### [Standard](#tab/standard)
The following table lists the operations where you can use both the system-assig
| Operation type | Supported operations |
|-|-|
| Built-in | - HTTP <br>- HTTP + Webhook <p>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. |
-| Managed connector (**Preview**) | Single-authentication: <br>- Azure Automation <br>- Azure Event Grid <br>- Azure Key Vault <br>- Azure Resource Manager <br>- HTTP with Azure AD <p>Multi-authentication: <br>- Azure Blob Storage <br>- SQL Server |
+| Managed connector (**Preview**) | Single-authentication: <br>- Azure Automation <br>- Azure Event Grid <br>- Azure Key Vault <br>- Azure Resource Manager <br>- HTTP with Azure AD <p>Multi-authentication: <br>- Azure Blob Storage <br>- Azure Event Hubs <br>- Azure Service Bus <br>- SQL Server |
|||
These steps show how to use the managed identity with a trigger or action throug
![Screenshot showing the connection name page and single managed identity selected in Consumption.](./media/create-managed-service-identity/single-system-identity-consumption.png)
- * **Multi-authentication**: These connectors support more than one authentication type. From the **Authentication type** list, select **Logic Apps Managed Identity** > **Create**, for example:
+ * **Multi-authentication**: These connectors show multiple authentication types, but you still can select only one type. From the **Authentication type** list, select **Logic Apps Managed Identity** > **Create**, for example:
![Screenshot showing the connection name page and "Logic Apps Managed Identity" selected in Consumption.](./media/create-managed-service-identity/multi-system-identity-consumption.png)
This example shows what the configuration looks like when the logic app enables
If you use an ARM template to automate deployment, and your workflow includes an *API connection*, which is created by a [managed connector](../connectors/managed.md) such as Office 365 Outlook, Azure Key Vault, and so on that uses a managed identity, you have an extra step to take.
-In this scenario, check that the underlying connection resource definition includes the `parameterValueSet` object, which includes the `name` property set to `managedIdentityAuth` and the `values` property set to an empty object. Otherwise, your ARM deployment won't set up the connection to use the managed identity for authentication, and the connection won't work in your workflow. This requirement applies only to [specific managed connector triggers and actions](#triggers-actions-managed-identity) where you selected the [**Connect with managed identity** option](#authenticate-managed-connector-managed-identity).
-
+In an ARM template, the underlying connector resource definition differs based on whether you have a Consumption or Standard logic app and whether the [connector shows single-authentication or multi-authentication options](#managed-connectors-managed-identity).
+
### [Consumption](#tab/consumption)
-For example, here's the underlying connection resource definition for an Azure Automation action in a Consumption logic app resource that uses a managed identity where the definition includes the `parameterValueSet` object, which has the `name` property set to `managedIdentityAuth` and the `values` property set to an empty object. Also note that the `apiVersion` property is set to `2018-07-01-preview`:
+The following examples apply to Consumption logic apps and show how the underlying connector resource definition differs between a single-authentication connector, such as Azure Automation, and a multi-authentication connector, such as Azure Blob Storage.
+
+#### Single-authentication
+
+This example shows the underlying connection resource definition for an Azure Automation action in a Consumption logic app that uses a managed identity where the definition includes the attributes:
+* The `apiVersion` property is set to `2016-06-01`.
+* The `kind` property is set to `V1` for a Consumption logic app.
+* The `parameterValueType` property is set to `Alternative`.
+
```json
{
  "type": "Microsoft.Web/connections",
- "name": "[variables('automationAccountApiConnectionName')]",
- "apiVersion": "2018-07-01-preview",
+ "name": "[variables('connections_azureautomation_name')]",
+ "apiVersion": "2016-06-01",
  "location": "[parameters('location')]",
  "kind": "V1",
  "properties": {
For example, here's the underlying connection resource definition for an Azure A
      "id": "[subscriptionResourceId('Microsoft.Web/locations/managedApis', parameters('location'), 'azureautomation')]"
    },
    "customParameterValues": {},
- "displayName": "[variables('automationAccountApiConnectionName')]",
+ "displayName": "[variables('connections_azureautomation_name')]",
+ "parameterValueType": "Alternative"
+ }
+},
+```
+
+#### Multi-authentication
+
+This example shows the underlying connection resource definition for an Azure Blob Storage action in a Consumption logic app that uses a managed identity where the definition includes the following attributes:
+
+* The `apiVersion` property is set to `2018-07-01-preview`.
+* The `kind` property is set to `V1` for a Consumption logic app.
+* The `parameterValueSet` object includes a `name` property that's set to `managedIdentityAuth` and a `values` property that's set to an empty object.
+
+```json
+{
+ "type": "Microsoft.Web/connections",
+ "apiVersion": "2018-07-01-preview",
+ "name": "[variables('connections_azureblob_name')]",
+ "location": "[parameters('location')]",
+ "kind": "V1",
+ "properties": {
+ "alternativeParameterValues":{},
+ "api": {
+ "id": "[subscriptionResourceId('Microsoft.Web/locations/managedApis', parameters('location'), 'azureblob')]"
+ },
+ "customParameterValues": {},
+ "displayName": "[variables('connections_azureblob_name')]",
    "parameterValueSet":{
      "name": "managedIdentityAuth",
      "values": {}
For example, here's the underlying connection resource definition for an Azure A
### [Standard](#tab/standard)
-For example, here's the underlying connection resource definition for an Azure Automation action in a Standard logic app resource that uses a managed identity where the definition includes the `parameterValueType` property, which is set to `Alternative`.
+The following examples apply to Standard logic apps and show how the underlying connector resource definition differs between a single-authentication connector, such as Azure Automation, and a multi-authentication connector, such as Azure Blob Storage.
-> [!NOTE]
-> For Standard, the `kind` property is set to `V2`, and the `apiVersion` property is set to `2016-06-01`:
+#### Single-authentication
+
+This example shows the underlying connection resource definition for an Azure Automation action in a Standard logic app that uses a managed identity where the definition includes the following attributes:
+* The `apiVersion` property is set to `2016-06-01`.
+* The `kind` property is set to `V2` for a Standard logic app.
+* The `parameterValueType` property is set to `Alternative`.
+
```json
{
  "type": "Microsoft.Web/connections",
+ "name": "[variables('connections_azureautomation_name')]",
"apiVersion": "2016-06-01",
- "name": "[variables('automationAccountApiConnectionName')]",
  "location": "[parameters('location')]",
  "kind": "V2",
  "properties": {
- "displayName": "[variables('automationAccountApiConnectionName')]",
- "parameterValueType": "Alternative",
"api": { "id": "[subscriptionResourceId('Microsoft.Web/locations/managedApis', parameters('location'), 'azureautomation')]"
- }
+ },
+ "customParameterValues": {},
+ "displayName": "[variables('connections_azureautomation_name')]",
+ "parameterValueType": "Alternative"
+ }
+},
+```
+
+#### Multi-authentication
+
+This example shows the underlying connection resource definition for an Azure Blob Storage action in a Standard logic app that uses a managed identity where the definition includes the following attributes:
+
+* The `apiVersion` property is set to `2018-07-01-preview`.
+* The `kind` property is set to `V2` for a Standard logic app.
+* The `parameterValueSet` object includes a `name` property that's set to `managedIdentityAuth` and a `values` property that's set to an empty object.
+
+```json
+{
+ "type": "Microsoft.Web/connections",
+ "apiVersion": "2018-07-01-preview",
+ "name": "[variables('connections_azureblob_name')]",
+ "location": "[parameters('location')]",
+ "kind": "V2",
+ "properties": {
+ "alternativeParameterValues":{},
+ "api": {
+ "id": "[subscriptionResourceId('Microsoft.Web/locations/managedApis', parameters('location'), 'azureblob')]"
+ },
+ "customParameterValues": {},
+ "displayName": "[variables('connections_azureblob_name')]",
+ "parameterValueSet":{
+ "name": "managedIdentityAuth",
+ "values": {}
  }
},
```
Following this `Microsoft.Web/connections` resource definition, make sure that y
| Parameter | Description |
|--|-|
-| <*connection-name*> | The name for your API connection, for example, `office365` |
+| <*connection-name*> | The name for your API connection, for example, `azureblob` |
| <*object-ID*> | The object ID for your Azure AD identity, previously saved from your app registration |
| <*tenant-ID*> | The tenant ID for your Azure AD identity, previously saved from your app registration |
|||
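Before deploying, the authentication shape of a connection resource can be sanity-checked against the two patterns shown in this section. The sketch below is a hypothetical helper (the function name and sample dictionaries are illustrative, not part of any Azure SDK); the rules mirror the single-authentication (`parameterValueType: Alternative`) and multi-authentication (`parameterValueSet` with name `managedIdentityAuth`) examples above:

```python
def uses_managed_identity(connection: dict) -> bool:
    """Return True if a Microsoft.Web/connections definition is configured
    for managed identity auth, per the patterns described in this article."""
    props = connection.get("properties", {})
    # Single-authentication connectors (apiVersion 2016-06-01)
    if props.get("parameterValueType") == "Alternative":
        return True
    # Multi-authentication connectors (apiVersion 2018-07-01-preview)
    pvs = props.get("parameterValueSet", {})
    return pvs.get("name") == "managedIdentityAuth" and pvs.get("values") == {}

# Minimal examples shaped like the resource definitions above
single = {"properties": {"parameterValueType": "Alternative"}}
multi = {"properties": {"parameterValueSet": {"name": "managedIdentityAuth", "values": {}}}}
```

A connection missing both markers would fall back to a different authentication mode and fail at runtime when the workflow expects a managed identity.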
logic-apps Logic Apps Securing A Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-securing-a-logic-app.md
The following table identifies the authentication types that are available on th
| [Client Certificate](#client-certificate-authentication) | Azure API Management, Azure App Services, HTTP, HTTP + Swagger, HTTP Webhook |
| [Active Directory OAuth](#azure-active-directory-oauth-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
| [Raw](#raw-authentication) | Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP + Swagger, HTTP Webhook |
-| [Managed identity](#managed-identity-authentication) | **Consumption logic app**: <br><br>- **Built-in**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <p><p>- **Managed connector** (preview): <p><p> **Single-authentication**: Azure AD Identity Protection, Azure Automation, Azure Container Instance, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Key Vault, Azure Resource Manager, Microsoft Sentinel, HTTP with Azure AD <p><p> **Multi-authentication**: Azure Blob Storage, SQL Server <p><p>___________________________________________________________________________________________<p><p>**Standard logic app**: <p><p>- **Built-in**: HTTP, HTTP Webhook <p><p>- **Managed connector** (preview): <p> **Single-authentication**: Azure AD Identity Protection, Azure Automation, Azure Container Instance, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Key Vault, Azure Resource Manager, Microsoft Sentinel, HTTP with Azure AD <p><p> **Multi-authentication**: Azure Blob Storage, SQL Server |
+| [Managed identity](#managed-identity-authentication) | **Consumption logic app**: <br><br>- **Built-in**: Azure API Management, Azure App Services, Azure Functions, HTTP, HTTP Webhook <p><p>- **Managed connector** (preview): <p><p> **Single-authentication**: Azure AD Identity Protection, Azure Automation, Azure Container Instance, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Key Vault, Azure Resource Manager, Microsoft Sentinel, HTTP with Azure AD <p><p> **Multi-authentication**: Azure Blob Storage, Azure Event Hubs, Azure Service Bus, SQL Server <p><p>___________________________________________________________________________________________<p><p>**Standard logic app**: <p><p>- **Built-in**: HTTP, HTTP Webhook <p><p>- **Managed connector** (preview): <p> **Single-authentication**: Azure AD Identity Protection, Azure Automation, Azure Container Instance, Azure Data Explorer, Azure Data Factory, Azure Data Lake, Azure Event Grid, Azure Key Vault, Azure Resource Manager, Microsoft Sentinel, HTTP with Azure AD <p><p> **Multi-authentication**: Azure Blob Storage, Azure Event Hubs, Azure Service Bus, SQL Server |
|||

<a name="secure-inbound-requests"></a>
logic-apps Logic Apps Using File Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-using-file-connector.md
Previously updated : 10/08/2020 Last updated : 03/11/2022

# Connect to on-premises file systems with Azure Logic Apps
With Azure Logic Apps and the File System connector, you can create automated ta
- Get file content and metadata.

> [!IMPORTANT]
- > The File System connector currently supports only Windows file systems on Windows operating systems.
+ > - The File System connector currently supports only Windows file systems on Windows operating systems.
+ > - The gateway machine and the file server must exist in the same Windows domain.
+ > - Mapped network drives aren't supported.
This article shows how you can connect to an on-premises file system as described by this example scenario: copy a file that's uploaded to Dropbox to a file share, and then send an email. To securely connect and access on-premises systems, logic apps use the [on-premises data gateway](../logic-apps/logic-apps-gateway-connection.md). If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md). For connector-specific technical information, see the [File System connector reference](/connectors/filesystem/).
marketplace Azure App Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-offer-listing.md
Previously updated : 06/01/2021 Last updated : 03/16/2022

# Configure your Azure application offer listing details
On the **Offer listing** page, under **Marketplace details**, complete the follo
1. The **Name** box is prefilled with the name you entered earlier in the **New offer** dialog box. You can change the name at any time. The name you enter here will be shown to customers as the title of your offer listing.
1. In the **Search results summary** box, enter up to 100 characters of text. This summary is used in the marketplace listing search results.
1. In the **Short description** box, enter up to 256 characters of plain text. This summary will appear on your offer's details page.
-1. In the **Description** box, enter a description for your offer. This text box has rich text editor controls that you can use to make your description more engaging. You can also use HTML tags to format your description. You can enter up to 3,000 characters of text in this box, which includes HTML markup and spaces. For information about HTML formatting, see [HTML tags supported in the commercial marketplace offer descriptions](supported-html-tags.md).
+1. In the **Description** box, enter a description for your offer. This text box has rich text editor controls that you can use to make your description more engaging. You can also use HTML tags to format your description. You can enter up to 5,000 characters of text in this box, which includes HTML markup and spaces. For information about HTML formatting, see [HTML tags supported in the commercial marketplace offer descriptions](supported-html-tags.md).
1. (Optional) In the **Search keywords** boxes, enter up to three search keywords that customers can use to find your offer in the commercial marketplace. You don't need to include the offer **Name** and **Description** because that text is automatically included in search.
1. In the **Privacy policy link** box, enter a link (starting with https) to your organization's privacy policy. You're responsible to ensure your app complies with privacy laws and regulations, and for providing a valid privacy policy.
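The listing fields above carry hard character limits (5,000 for the description including HTML markup and spaces, 256 for the short description, 100 for the search results summary). A minimal sketch of a pre-submission length check follows; the field names and helper are hypothetical conveniences, not part of any Partner Center API:

```python
# Character limits from the offer-listing guidance above.
# Description length counts HTML markup and spaces, so measure the raw string.
LIMITS = {
    "description": 5000,
    "short_description": 256,
    "search_results_summary": 100,
}

def check_listing_fields(fields: dict) -> list:
    """Return the names of listing fields whose raw length exceeds its limit."""
    return [name for name, text in fields.items()
            if name in LIMITS and len(text) > LIMITS[name]]

errors = check_listing_fields({
    "description": "<p>An engaging offer description.</p>",
    "short_description": "Plain-text summary.",
})
```

Running the check before submission avoids a validation failure in Partner Center after the rest of the listing is filled in.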
marketplace Azure Vm Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-offer-listing.md
Previously updated : 11/11/2021 Last updated : 03/15/2022

# Configure virtual machine offer listing details
Offer listing content is not required to be in English as long as the offer desc
The name entered here should be descriptive because it will be the title of your offer listing. This field is autofilled with the name that you entered in the **Offer alias** box when you created the offer. The name:

- Can include trademark and copyright symbols.
-- Must be 50 characters or less.
+- Must be 200 characters or less.
- Can't include emojis.

### Search results summary
marketplace Azure Vm Plan Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-listing.md
Previously updated : 02/18/2022 Last updated : 03/15/2022

# Configure plan listing for a virtual machine offer
Configure the listing details of the plan. This pane displays specific informati
## Plan name
-This field is automatically filled with the name that you gave your plan when you created it. This name appears on Azure Marketplace as the title of this plan. It is limited to 100 characters.
+This field is automatically filled with the name that you gave your plan when you created it. This name appears on Azure Marketplace as the title of this plan. It is limited to 200 characters.
## Plan summary
Provide a short summary of your plan, not the offer. This summary is limited to
## Plan description
-Describe what makes this software plan unique, and describe any differences between plans within your offer. Describe the plan only, not the offer. The plan description can contain up to 2,000 characters.
+Describe what makes this software plan unique, and describe any differences between plans within your offer. Describe the plan only, not the offer. The plan description can contain up to 3,000 characters.
Select **Save draft** before continuing to the next tab in the left-nav Plan menu, **Pricing and availability**.
marketplace Azure Vm Plan Technical Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-plan-technical-configuration.md
Previously updated : 02/18/2022 Last updated : 03/16/2022

# Technical configuration for a virtual machine offer
Here is a list of properties that can be selected for your VM.
- **Is a network virtual appliance**: Enable this property if this product is a Network Virtual Appliance. A network virtual appliance is a product that performs one or more network functions, such as a Load Balancer, VPN Gateway, Firewall or Application Gateway. Learn more about [network virtual appliances](https://go.microsoft.com/fwlink/?linkid=2155373).

-- **Remote desktop or SSH disabled**: Enable this property if virtual machines deployed with these images don't allow customers to access it using Remote Desktop or SSH. Learn more about [locked VM images](./azure-vm-certification-faq.yml#locked-down-or-ssh-disabled-offer).
+- **Remote desktop or SSH disabled**: Enable this property if any of the following conditions are true:
+ - Virtual machines deployed with these images don't allow customers to access it using Remote Desktop or SSH. Learn more about [locked VM images](./azure-vm-certification-faq.yml#locked-down-or-ssh-disabled-offer).
+ - Image does not support _sampleuser_ while deploying.
+ - Image has limited access.
+ - Image does not comply with the [Certification Test Tool](azure-vm-image-test.md#use-certification-test-tool-for-azure-certified).
+ - Image requires setup during initial login which causes automation to not connect to the virtual machine.
+ - Image does not support port 22.
- **Requires custom ARM template for deployment**: Enable this property if the images in this plan can only be deployed using a custom ARM template. To learn more see the [Custom templates section of Troubleshoot virtual machine certification](./azure-vm-certification-faq.yml#custom-templates).
marketplace Azure Vm Test Drive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-vm-test-drive.md
Previously updated : 12/06/2021 Last updated : 03/15/2022
Complete your test drive solution by continuing to the next **Test drive** tab i
Provide additional details of your listing and resources for your customers.
-**Description** – Describe your test drive, what will be demonstrated, features to explore, objectives for the user to experiment with, and other relevant information to help them determine if your offer is right for them (up to 3,000 characters).
+**Description** – Describe your test drive, what will be demonstrated, features to explore, objectives for the user to experiment with, and other relevant information to help them determine if your offer is right for them (up to 5,000 characters).
**Access information** – Walk through a scenario for exactly what the customer needs to know to access and use the features throughout the test drive (up to 10,000 characters).
marketplace Create Managed Service Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-managed-service-offer-listing.md
Previously updated : 07/12/2021 Last updated : 03/15/2022

# Configure Managed Service offer listing details
On the **Offer listing** page in Partner Center, provide the information describ
1. The **Name** box is pre-filled with the name you entered earlier in the New offer dialog box, but you can change it at any time. This name will appear as the title of your offer listing on the online store.
2. In the **Search results summary** box, describe the purpose or goal of your offer in 100 characters or less.
3. In the **Short description** field, provide a short description of your offer (up to 256 characters). It'll be displayed on your offer listing in the Azure portal.
-4. In the **Description** field, describe your Managed Service offer. You can enter up to 2,000 characters of text in this box, including HTML tags and spaces. For information about HTML formatting, see [HTML tags supported in the offer descriptions](./supported-html-tags.md).
+4. In the **Description** field, describe your Managed Service offer. You can enter up to 5,000 characters of text in this box, including HTML tags and spaces. For information about HTML formatting, see [HTML tags supported in the offer descriptions](./supported-html-tags.md).
5. In the **Privacy policy link** box, enter a link (starting with https) to your organization's privacy policy. You're responsible to ensure your offer complies with privacy laws and regulations, and for providing a valid privacy policy.

## Product information links
marketplace Create Managed Service Offer Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/create-managed-service-offer-plans.md
Previously updated : 02/02/2022 Last updated : 03/15/2022

# Create plans for a Managed Service offer
Managed Service offers sold through the Microsoft commercial marketplace must ha
1. On the **Plan overview** tab of your offer in Partner Center, select **+ Create new plan**.
2. In the dialog box that appears, under **Plan ID**, enter a unique plan ID. Use up to 50 lowercase alphanumeric characters, dashes, or underscores. You cannot modify the plan ID after you select **Create**. This ID will be visible to your customers.
-3. In the **Plan name** box, enter a unique name for this plan. Use a maximum of 50 characters. This name will be visible to your customers.
+3. In the **Plan name** box, enter a unique name for this plan. Use a maximum of 200 characters. This name will be visible to your customers.
4. Select **Create**. ## Define the plan listing
marketplace Dynamics 365 Business Central Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/dynamics-365-business-central-offer-listing.md
Previously updated : 11/24/2021 Last updated : 03/15/2022 # Configure Dynamics 365 Business Central offer listing details
Here's an example of how offer information appears in Microsoft AppSource (any l
The **Name** you enter here is shown to customers as the title of the offer. This field is pre-populated with the name you entered for **Offer alias** when you created the offer, but you can change it. The name: - Can include trademark and copyright symbols.-- Must be 50 characters or less.
+- Must be 200 characters or less.
- Can't include emojis. Provide a short description of your offer for the **Search results summary** (up to 100 characters). This description may be used in marketplace search results.
marketplace Iot Edge Offer Listing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/iot-edge-offer-listing.md
Previously updated : 05/21/2021 Last updated : 03/15/2022 # Configure IoT Edge Module offer listing details
Here's an example of how offer information appears in Azure Marketplace (any lis
The **Name** you enter here is shown to customers as the title of the offer. This field is pre-populated with the name you entered for **Offer alias** when you created the offer, but you can change it. The name: - Can include trademark and copyright symbols.-- Must be 50 characters or less.
+- Must be 200 characters or less.
- Can't include emojis. Provide a short description of your offer for the **Search results summary** (up to 100 characters). This description may be used in marketplace search results.
marketplace Marketplace Metering Service Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-metering-service-apis.md
For Azure Application Managed Apps plans, the `resourceUri` is the Managed App `
{ "request": [ // list of usage events for the same or different resources of the publisher { // first event
- "resourceUri": "<guid1>", // Unique identifier of the resource against which usage is emitted.
+ "resourceUri": "<fullyqualifiedname>", // Unique identifier of the resource against which usage is emitted.
"quantity": 5.0, // how many units were consumed for the date and hour specified in effectiveStartTime; must be greater than 0, expressed as an integer or double value
"dimension": "dim1", // Custom dimension identifier
"effectiveStartTime": "2018-12-01T08:30:14", // Time in UTC when the usage event occurred, from now and up to 24 hours back
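The usage event fields above can be assembled programmatically before calling the batch usage endpoint. Below is a minimal Python sketch that only builds the request body; the resource ID, plan ID, and helper name are hypothetical, and the actual HTTP call to the metering API (endpoint, auth token) is omitted.

```python
import json

def build_usage_event(resource_uri, dimension, quantity, effective_start_time, plan_id):
    """Build one usage event for the marketplace batch usage API.

    For Azure Application Managed Apps plans, resource_uri is the
    fully qualified Managed App resource ID (not a GUID).
    """
    if quantity <= 0:
        raise ValueError("quantity must be greater than 0")
    return {
        "resourceUri": resource_uri,
        "quantity": float(quantity),  # integer or double, as a JSON number
        "dimension": dimension,
        "effectiveStartTime": effective_start_time,  # UTC, within the past 24 hours
        "planId": plan_id,
    }

# Assemble the request body expected by the batch endpoint.
# The resource ID below is a hypothetical placeholder.
body = {
    "request": [
        build_usage_event(
            "/subscriptions/<subscription-id>/resourceGroups/rg1/providers/"
            "Microsoft.Solutions/applications/app1",
            "dim1", 5.0, "2018-12-01T08:30:14", "plan1",
        )
    ]
}
print(json.dumps(body, indent=2))
```

Note that `quantity` is emitted as a JSON number, matching the sample payload above.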
marketplace Marketplace Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-power-bi.md
Previously updated : 05/26/2021 Last updated : 03/15/2022 # Plan a Power BI App offer
You'll need terms and conditions customers must accept before they can try your
To help create your offer more easily, prepare these items ahead of time. All are required except where noted. -- **Name** – The name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and is limited to 50 characters.
+- **Name** – The name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and is limited to 200 characters.
- **Search results summary** – The purpose or function of your offer as a single sentence with no line breaks in 100 characters or less. This is used in the commercial marketplace listing(s) search results. - **Description** – This description displays in the commercial marketplace listing(s) overview. Consider including a value proposition, key benefits, intended user base, any category or industry associations, in-app purchase opportunities, any required disclosures, and a link to learn more. This text box has rich text editor controls to make your description more engaging. Optionally, use HTML tags for formatting. - **Search keywords** (optional) – Up to three search keywords that customers can use to find your offer. Don't include the offer **Name** and **Description**; that text is automatically included in search.
marketplace Pc Saas Fulfillment Webhook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/pc-saas-fulfillment-webhook.md
description: Learn how to implement a webhook on the SaaS service by using the f
Previously updated : 03/15/2022 Last updated : 03/16/2022
The publisher must implement a webhook in the SaaS service to keep the SaaS subs
"publisherId": "contoso", "offerId": "offer2 ", "planId": "gold",
- "quantity": "20",
+ "quantity": 20,
"timeStamp": "2019-04-15T20:17:31.7350641Z", "action": "Reinstate", "status": "InProgress"
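The corrected payload above (with `quantity` as a JSON number rather than a string) can be shape-checked before your service acts on the notification. The sketch below is a minimal Python illustration, not the official SDK; it assumes the action names documented for the SaaS fulfillment webhook.

```python
import json

# Example webhook payload, abridged from the docs sample above;
# note that quantity is a JSON number, not a string.
payload = json.loads("""
{
  "publisherId": "contoso",
  "offerId": "offer2",
  "planId": "gold",
  "quantity": 20,
  "timeStamp": "2019-04-15T20:17:31.7350641Z",
  "action": "Reinstate",
  "status": "InProgress"
}
""")

def validate_webhook(event):
    """Minimal shape check before acting on a webhook notification."""
    assert isinstance(event["quantity"], (int, float)), "quantity must be numeric"
    assert event["action"] in {
        "ChangePlan", "ChangeQuantity", "Suspend",
        "Unsubscribe", "Reinstate", "Renew",
    }, "unexpected action"
    return event["action"], event["quantity"]

action, qty = validate_webhook(payload)
print(action, qty)  # Reinstate 20
```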
marketplace Plan Azure Application Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-application-offer.md
Previously updated : 11/11/2021 Last updated : 03/16/2022 # Tutorial: Plan an Azure Application offer
The following screenshot shows how offer information appears in the Azure portal
To help create your offer more easily, prepare some of these items ahead of time. The following items are required unless otherwise noted. -- **Name**: This name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and must be limited to 50 characters.
+- **Name**: This name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It cannot contain emojis (unless they are the trademark and copyright symbols) and must be limited to 200 characters.
- **Search results summary**: Describe the purpose or function of your offer as a single sentence, in plain text with no line breaks, in 100 characters or less. This summary is used in the commercial marketplace listing(s) search results. - **Short description**: Provide up to 256 characters of plain text. This summary will appear on your offer's details page. - **Description**: This description will be displayed in the Azure Marketplace listing(s) overview. Consider including a value proposition, key benefits, intended user base, any category or industry associations, in-app purchase opportunities, customer need or pain that the offer addresses, any required disclosures, and a link to learn more.
- This text box has rich text editor controls that you can use to make your description more engaging. You can also use HTML tags to format your description. You can enter up to 3,000 characters of text in this box, which includes HTML markup and spaces. For additional tips, see [Write a great app description](/windows/uwp/publish/write-a-great-app-description) and [HTML tags supported in the commercial marketplace offer descriptions](supported-html-tags.md).
+ This text box has rich text editor controls that you can use to make your description more engaging. You can also use HTML tags to format your description. You can enter up to 5,000 characters of text in this box, which includes HTML markup and spaces. For additional tips, see [Write a great app description](/windows/uwp/publish/write-a-great-app-description) and [HTML tags supported in the commercial marketplace offer descriptions](supported-html-tags.md).
- **Search keywords** (optional): Provide up to three search keywords that customers can use to find your offer in the online store. For best results, also use these keywords in your description. You don't need to include the offer **Name** and **Description**. That text is automatically included in search. - **Privacy policy link**: The URL for your company's privacy policy. You must provide a valid privacy policy and are responsible for ensuring your app complies with privacy laws and regulations.
marketplace Plan Consulting Service Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-consulting-service-offer.md
Previously updated : 11/30/2021 Last updated : 03/16/2022 # Plan a consulting service offer
When you create your consulting service offer in Partner Center, you'll enter
To help create your offer more easily, prepare some of these items ahead of time. The following items are required unless otherwise noted.
-**Name**: This name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It can't contain emojis (unless they're the trademark and copyright symbols) and must be limited to 50 characters. The name must include the duration and service type of the offer to maximize search engine optimization (SEO). The required format is *Name: Duration + type*. Don't include your company name unless it's also the product name. Here are some examples:
+**Name**: This name will appear as the title of your offer listing in the commercial marketplace. The name may be trademarked. It can't contain emojis (unless they're the trademark and copyright symbols) and must be limited to 200 characters. The name must include the duration and service type of the offer to maximize search engine optimization (SEO). The required format is *Name: Duration + type*. Don't include your company name unless it's also the product name. Here are some examples:
|Don't say |Say | |||
Here are some tips for writing your description:
* If the price of your offer is estimated, explain what variables will determine the final price. * Use industry-specific vocabulary.
-You can use HTML tags to format your description. You can enter up to 2,000 characters of text in this box, including HTML tags and spaces. For information about HTML formatting, see [HTML tags supported in the commercial marketplace offer descriptions](./supported-html-tags.md).
+You can use HTML tags to format your description. You can enter up to 5,000 characters of text in this box, including HTML tags and spaces. For information about HTML formatting, see [HTML tags supported in the commercial marketplace offer descriptions](./supported-html-tags.md).
**Search keywords** (optional): Provide up to three search keywords that customers can use to find your offer in the online stores. You don't need to include the offer **Name** and **Description**.
mysql Howto Migrate Single Flexible Minimum Downtime https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/howto-migrate-single-flexible-minimum-downtime.md
To complete this tutorial, you need:
* To install mysql client or MySQL Workbench (the client tools) on your Azure VM. Ensure that you can connect to both the primary and replica server. For the purposes of this article, mysql client is installed. * To install mydumper/myloader on your Azure VM. For more information, see the article [mydumper/myloader](concepts-migrate-mydumper-myloader.md). * To download and run the sample database script for the [classicmodels](https://www.mysqltutorial.org/wp-content/uploads/2018/03/mysqlsampledatabase.zip) database on the source server.
+* Configure [binlog_expire_logs_seconds](./concepts-server-parameters.md#binlog_expire_logs_seconds) on the source server to ensure that binlogs aren't purged before the replica commits the changes. After a successful cutover, you can reset the value.
## Configure networking requirements
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
You can use traffic analytics for NSGs in any of the following supported regions
Japan West Korea Central Korea South
- North Central US
- North Europe
+ North Central US
+ North Europe
:::column-end::: :::column span=""::: Norway East
You can use traffic analytics for NSGs in any of the following supported regions
Southeast Asia Switzerland North Switzerland West
- UAE Central
+ UAE Central
UAE North UK South
- UK West
- USGov Arizona
+ UK West
+ USGov Arizona
:::column-end::: :::column span=""::: USGov Texas
You can use traffic analytics for NSGs in any of the following supported regions
West Central US West Europe West US
- West US 2
- West US 3
+ West US 2
+ West US 3
:::column-end::: :::row-end:::
The Log Analytics workspace must exist in the following regions:
Australia Southeast Brazil South Brazil Southeast
- Canada East
+ Canada East
Canada Central Central India Central US
- China East 2
- China North
- China North 2
+ China East 2
+ China North
+ China North 2
:::column-end::: :::column span=""::: East Asia
The Log Analytics workspace must exist in the following regions:
Germany West Central Japan East Japan West
- Korea Central
- Korea South
+ Korea Central
+ Korea South
North Central US North Europe :::column-end:::
The Log Analytics workspace must exist in the following regions:
Norway East South Africa North South Central US
- South India
+ South India
Southeast Asia Switzerland North Switzerland West
The Log Analytics workspace must exist in the following regions:
UAE North UK South UK West
- USGov Arizona
+ USGov Arizona
:::column-end::: :::column span="":::
- USGov Texas
+ USGov Texas
USGov Virginia USNat East USNat West
The Log Analytics workspace must exist in the following regions:
West Europe West US West US 2
- West US 3
+ West US 3
:::column-end::: :::row-end:::
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[SoftBank]( https://www.softbank.jp/biz/nw/nwp/cloud_access/direct_access_for_az/)|[Azure Network Consulting Service: 1-Week Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/sbmpn.softbank_nw_msp_service_azure); [Azure Assessment Service: 1-Week](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/sbmpn.softbank_msp_service_azure_01?tab=Overview&pub_source=email&pub_status=success)||||| |[TCTS](https://www.tatacommunications-ts.com/index.php)|Azure Migration: 3-Week Assessment||||| |[Tata Communications](https://www.tatacommunications.com/about/our-alliances/microsoft-alliance/)||[Managed Azure ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/tata_communications.managed_expressroute?tab=Overview)|[Managed Virtual WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/tata_communications.managed_azure_vwan_for_sdwan?tab=Overview)|||
-|[Tech Mahindra](https://www.techmahindra.com/en-in/network-services/)|[Tech Mahindra End to End Managed Network Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/techm.techm-network-transformstrategy?tab=Overview)|||[Azure Private LTE MSP](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/techm.techm-networking-azureprivate5g?tab=Overview)|
+|[Tech Mahindra](https://www.techmahindra.com/en-in/network-services/)|[Tech Mahindra End to End Managed Network Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/techm.techm-network-transformstrategy?tab=Overview)|||[Azure Private LTE MSP](https://azuremarketplace.microsoft.com/marketplace/apps/techm.private_5g_network)|
|[Telia](https://business.teliacompany.com/global-solutions/Business-Defined-Networking/Hybrid-Networking)|[Azure landing zone: 5-Day workshops](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telia.ps_caf_far_001)||[Telia Cloud First Azure vWAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/telia.telia_cloud_first_azure_vwan?tab=Overview)|[Telia IoT Platform](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/telia.telia_iot_platform?tab=Overview)| |[Vigilant IT](https://vigilant.it/cloud-infrastructure/cloud-management/)|[Azure Health Check: 3-Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/greymatter.azurehealth)|||| |[Vandis](https://www.vandis.com/services/microsoft-azure-practice/)|[Managed NAC With Aruba ClearPass Policy Manager](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_aruba_clearpass?tab=Overview)|[Vandis Managed ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_managed_expressroute?tab=Overview)|[Vandis Managed VWAN Powered by Fortinet](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_managed_vwan_powered_by_fortinet?tab=Overview); [Vandis Managed VWAN Powered by Palo Alto Networks](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_managed_vwan_powered_by_palo_alto_networks?tab=Overview); [Managed VWAN Powered by Barracuda CloudGen WAN](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/vandis.vandis_barracuda_vwan?tab=Overview)|
postgresql Concepts Compare Single Server Flexible Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-compare-single-server-flexible-server.md
Last updated 12/08/2021
# Comparison chart - Azure Database for PostgreSQL Single Server and Flexible Server
-The following table provides a high-level features and capabilities comparisons between Single Server and Flexible Server.
+The following table provides a high-level comparison of features and capabilities between Single Server and Flexible Server. For most new deployments, we recommend using Flexible Server. However, you should weigh your own requirements against the comparison table below.
| **Feature / Capability** | **Single Server** | **Flexible Server** | | - | - | - |
purview Concept Best Practices Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-best-practices-automation.md
When to use?
* Custom application development or process automation. ## Streaming (Atlas Kafka)
-Each Azure Purview account comes with a fully managed event hub, accessible via the Atlas Kafka endpoint found via the Azure portal > Azure Purview Account > Properties. Azure Purview events can be monitored by consuming messages from the event hub. External systems can also use the event hub to publish events to Azure Purview as they occur.
+Each Azure Purview account comes with an optional fully managed event hub, accessible via the Atlas Kafka endpoint found in the Azure portal under Azure Purview Account > Properties. Azure Purview events can be monitored by consuming messages from the event hub. External systems can also use the event hub to publish events to Azure Purview as they occur.
* **Consume Events** - Azure Purview will send notifications about metadata changes to Kafka topic **ATLAS_ENTITIES**. Applications interested in metadata changes can monitor for these notifications. Supported operations include: `ENTITY_CREATE`, `ENTITY_UPDATE`, `ENTITY_DELETE`, `CLASSIFICATION_ADD`, `CLASSIFICATION_UPDATE`, `CLASSIFICATION_DELETE`. * **Publish Events** - Azure Purview can be notified of metadata changes via notifications to Kafka topic **ATLAS_HOOK**. Supported operations include: `ENTITY_CREATE_V2`, `ENTITY_PARTIAL_UPDATE_V2`, `ENTITY_FULL_UPDATE_V2`, `ENTITY_DELETE_V2`.
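As a rough illustration of consuming these notifications, the Python sketch below parses a trimmed ATLAS_ENTITIES-style message and filters on the supported operation types listed above. The message envelope, entity type name, and qualified name are illustrative assumptions; in practice you'd receive the body from an Event Hubs or Kafka consumer client connected to the Atlas Kafka endpoint.

```python
import json

# A trimmed ATLAS_ENTITIES-style notification body. The envelope and
# attribute names here are illustrative, not an exact Azure Purview schema.
message_body = json.dumps({
    "message": {
        "operationType": "ENTITY_UPDATE",
        "entity": {
            "typeName": "azure_blob_path",  # hypothetical entity type
            "attributes": {
                "qualifiedName": "https://examplestore.blob.core.windows.net/container/file.csv"
            },
        },
    }
})

# Operation types listed in the docs for the ATLAS_ENTITIES topic.
SUPPORTED_OPS = {
    "ENTITY_CREATE", "ENTITY_UPDATE", "ENTITY_DELETE",
    "CLASSIFICATION_ADD", "CLASSIFICATION_UPDATE", "CLASSIFICATION_DELETE",
}

def handle_notification(body: str):
    """Return (operation, entity type) for supported operations, else None."""
    msg = json.loads(body)["message"]
    op = msg["operationType"]
    if op not in SUPPORTED_OPS:
        return None
    return op, msg["entity"]["typeName"]

print(handle_notification(message_body))
```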
When to use?
* [Docs](/python/api/azure-mgmt-purview/?view=azure-python&preserve-view=true) | [PyPi](https://pypi.org/project/azure-mgmt-purview/) azure-mgmt-purview ## Next steps
-* [Azure Purview REST API](/rest/api/purview)
+* [Azure Purview REST API](/rest/api/purview)
purview Concept Self Service Data Access Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-self-service-data-access-policy.md
With self-service data access workflow, data consumers can not only find data as
A default self-service data access workflow template is provided with every Azure Purview account. The default template can be amended to add more approvers and/or set the approver's email address. For more details, refer to [Create and enable self-service data access workflow](./how-to-workflow-self-service-data-access-hybrid.md).
-Whenever a data consumer requests access to a dataset, the notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from Azure purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective, data source. self-service data access Policy gets auto-generated only if the data source is registered for **data use governance**. The pre-requisites mentioned within the [data use governance](./how-to-enable-data-use-governance.md) have to be satisfied.
+Whenever a data consumer requests access to a dataset, a notification is sent to the workflow approver(s). The approver(s) can view the request and approve it either from the Azure Purview portal or from within the email notification. When the request is approved, a policy is auto-generated and applied against the respective data source. A self-service data access policy is auto-generated only if the data source is registered for **data use governance**. The prerequisites mentioned in [data use governance](./how-to-enable-data-use-governance.md) must be satisfied.
## Next steps
purview Manage Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/manage-credentials.md
Last updated 02/16/2022-+ # Credentials for source authentication in Azure Purview
If you're using the Azure Purview system-assigned managed identity (SAMI) to set
- [Azure Data Lake Storage Gen2](register-scan-adls-gen2.md#authentication-for-a-scan) - [Azure SQL Database](register-scan-azure-sql-database.md) - [Azure SQL Database Managed Instance](register-scan-azure-sql-database-managed-instance.md#authentication-for-registration)-- [Azure Synapse Analytics](register-scan-azure-synapse-analytics.md#authentication-for-registration)
+- [Azure Synapse Workspace](register-scan-synapse-workspace.md#authentication-for-registration)
+- [Azure Synapse dedicated SQL pools (formerly SQL DW)](register-scan-azure-synapse-analytics.md#authentication-for-registration)
## Grant Azure Purview access to your Azure Key Vault
purview Tutorial Azure Purview Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/tutorial-azure-purview-checklist.md
+
+ Title: Learn about prerequisites to successfully deploy an Azure Purview account
+description: This tutorial lists prerequisites to deploy an Azure Purview account.
+++++ Last updated : 03/15/2022
+# Customer Intent: As a Data and Data Security administrator, I want to deploy Azure Purview as a unified data governance solution.
++
+# Azure Purview deployment checklist
+
+This article lists prerequisites that help you get started quickly on Azure Purview planning and deployment.
+
+|No. |Prerequisite / Action |Required Permission |Additional guidance and recommendations |
+|:|:|:|:|
+|1 | Azure Active Directory Tenant |N/A |An [Azure Active Directory tenant](../active-directory/fundamentals/active-directory-access-create-new-tenant.md) should be associated with your subscription. <ul><li>*Global Administrator* or *Information Protection Administrator* role is required if you plan to [extend Microsoft 365 Sensitivity Labels to Azure Purview for files and db columns](create-sensitivity-label.md)</li><li> *Global Administrator* or *Power BI Administrator* role is required if you're planning to [scan Power BI tenants](register-scan-power-bi-tenant.md).</li></ul> |
+|2 |An active Azure Subscription |*Subscription Owner* |An Azure subscription is needed to deploy Azure Purview and its managed resources. If you don't have an Azure subscription, create a [free subscription](https://azure.microsoft.com/free/) before you begin. |
+|3 |Define whether you plan to deploy an Azure Purview account with a managed Event Hub | N/A |A managed Event Hub is created as part of Azure Purview account creation. You can publish messages to the Event Hub Kafka topic ATLAS_HOOK, and Azure Purview will consume and process them. Azure Purview will notify entity changes to the Event Hub Kafka topic ATLAS_ENTITIES, which users can consume and process. This quickstart uses the new Azure.Messaging.EventHubs library. |
+|4 |Register the following resource providers: <ul><li>Microsoft.Storage</li><li>Microsoft.EventHub (optional)</li><li>Microsoft.Purview</li></ul> |*Subscription Owner* or custom role to register Azure resource providers (_/register/action_) | [Register required Azure Resource Providers](/azure-resource-manager/management/resource-providers-and-types.md) in the Azure Subscription that is designated for Azure Purview Account. Review [Azure resource provider operations](../role-based-access-control/resource-provider-operations.md). |
+|5 |Update Azure Policy to allow deployment of the following resources in your Azure subscription: <ul><li>Azure Purview</li><li>Azure Storage</li><li>Azure Event Hub (optional)</li></ul> |*Subscription Owner* |Use this step if an existing Azure Policy prevents deploying such Azure resources. If a blocking policy exists and needs to remain in place, please follow our [Azure Purview exception tag guide](create-azure-purview-portal-faq.md) and follow the steps to create an exception for Azure Purview accounts. |
+|6 | Define your network security requirements. | Network and Security architects. |<ul><li> Review [Azure Purview network architecture and best practices](concept-best-practices-network.md) to define what scenario is more relevant to your network requirements. </li><li>If private network is needed, use [Azure Purview Managed IR](catalog-managed-vnet.md) to scan Azure data sources when possible to reduce complexity and administrative overhead. </li></ul> |
+|7 |An Azure Virtual Network and Subnet(s) for Azure Purview private endpoints. | *Network Contributor* to create or update Azure VNet. |Use this step if you're planning to set up [private endpoint connectivity with Azure Purview](catalog-private-link.md): <ul><li>Private endpoints for **ingestion**.</li><li>Private endpoint for Azure Purview **Account**.</li><li>Private endpoint for Azure Purview **Portal**.</li></ul> <br> Deploy [Azure Virtual Network](../virtual-network/quick-create-portal.md) if you need to. |
+|8 |Deploy private endpoint for Azure data sources. |*Network Contributor* to set up Private endpoints for each data source. |Perform this step if you're planning to use [Private Endpoint for Ingestion](catalog-private-link-end-to-end.md). |
+|9 |Define whether to deploy new or use existing Azure Private DNS Zones. |Required [Azure Private DNS Zones](catalog-private-link-name-resolution.md) can be created automatically during Purview Account deployment using Subscription Owner / Contributor role |Use this step if you're planning to use Private Endpoint connectivity with Azure Purview. Required DNS Zones for Private Endpoint: <ul><li>privatelink.purview.azure.com</li><li>privatelink.purviewstudio.azure.com</li><li>privatelink.blob.core.windows.net</li><li>privatelink.queue.core.windows.net</li><li>privatelink.servicebus.windows.net</li></ul> |
+|10 |A management machine in your CorpNet or inside Azure VNet to launch Azure Purview Studio. |N/A |Use this step if you're planning to set **Allow Public Network** to **deny** on your Azure Purview account. |
+|11 |Deploy an Azure Purview Account |Subscription Owner / Contributor |Purview account is deployed with 1 Capacity Unit and will scale up based [on demand](concept-elastic-data-map.md). |
+|12 |Deploy a Managed Integration Runtime and Managed private endpoints for Azure data sources. |*Data source admin* to set up Managed VNet inside Azure Purview. <br> *Network Contributor* to approve managed private endpoint for each Azure data source. |Perform this step if you're planning to use [Managed VNet](catalog-managed-vnet.md) within your Azure Purview account for scanning purposes. |
+|13 |Deploy Self-hosted integration runtime VMs inside your network. |Azure: *Virtual Machine Contributor* <br> On-prem: Application owner |Use this step if you're planning to perform any scans using Self-hosted Integration Runtime. |
+|14 |Create a Self-hosted integration runtime inside Azure Purview. |Data curator <br> VM Administrator or application owner |Use this step if you're planning to use a Self-hosted Integration Runtime instead of a Managed Integration Runtime or Azure Integration Runtime. <br><br> [Download the Self-hosted Integration Runtime](https://www.microsoft.com/en-us/download/details.aspx?id=39717). |
+|15 |Register your Self-hosted integration runtime | Virtual machine administrator |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server). <br> Use this step if you're using a **Private Endpoint** to scan **any** data source. |
+|16 |Grant Azure RBAC **Reader** role to **Azure Purview MSI** at data sources' Subscriptions |*Subscription owner* or *User Access Administrator* |Use this step if you're planning to register **multiple** or **any** of the following data sources: <ul><li>Azure Blob Storage</li><li>Azure Data Lake Storage Gen1</li><li>Azure Data Lake Storage Gen2</li><li>Azure SQL Database</li><li>Azure SQL Database Managed Instance</li><li>Azure Synapse Analytics</li></ul> |
+|17 |Grant Azure RBAC **Storage Blob Data Reader** role to **Azure Purview MSI** at data sources Subscriptions. |*Subscription owner* or *User Access Administrator* | **Skip** this step if you are using Private Endpoint to connect to data sources. Use this step if you have these data sources:<ul><li>Azure Blob Storage</li><li>Azure Data Lake Storage Gen1</li></ul> |
+|18 |Enable network connectivity to allow AzureServices to access data sources: <br> e.g. Enable "**Allow trusted Microsoft services to access this storage account**". |*Owner* or *Contributor* at Data source |Use this step if **Service Endpoint** is used in your data sources. (Don't use this step if Private Endpoint is used) |
+|19 |Enable **Azure Active Directory Authentication** on **Azure SQL Servers**, **Azure SQL Database Managed Instance** and **Azure Synapse Analytics** |Azure SQL Server Contributor |Use this step if you have **Azure SQL DB** or **Azure SQL Database Managed Instance** or **Azure Synapse Analytics** as data source. **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
+|20 |Grant **Azure Purview MSI** account with **db_datareader** role to Azure SQL databases and Azure SQL Database Managed Instance databases |Azure SQL Administrator |Use this step if you have **Azure SQL DB** or **Azure SQL Database Managed Instance** as data source. **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
+|21 |Grant Azure RBAC **Storage Blob Data Reader** to **Synapse SQL Server** for staging Storage Accounts |Owner or User Access Administrator at data source |Use this step if you have **Azure Synapse Analytics** as data sources. **Skip** this step if you are using Private Endpoint to connect to data sources. |
+|22 |Grant Azure RBAC **Reader** role to **Azure Purview MSI** at **Synapse workspace** resources |Owner or User Access Administrator at data source |Use this step if you have **Azure Synapse Analytics** as data sources. **Skip** this step if you are using Private Endpoint to connect to data sources. |
+|23 |Grant Azure **Purview MSI account** with **db_datareader** role |Azure SQL Administrator |Use this step if you have **Azure Synapse Analytics (Dedicated SQL databases)**. <br> **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
+|24 |Grant **Azure Purview MSI** account with **sysadmin** role |Azure SQL Administrator |Use this step if you have Azure Synapse Analytics (Serverless SQL databases). **Skip** this step if you are using **Private Endpoint** to connect to data sources. |
+|25 |Create an app registration or service principal inside your Azure Active Directory tenant | Azure Active Directory *Global Administrator* or *Application Administrator* | Use this step if you're planning to perform a scan on a data source using Delegated Auth or [Service Principal](create-service-principal-azure.md).|
+|26 |Create an **Azure Key Vault** and a **Secret** to save data source credentials or service principal secret. |*Contributor* or *Key Vault Administrator* |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server). <br> Use this step if you are using **ingestion private endpoints** to scan a data source. |
+|27 |Grant **Key Vault Access Policy** to Azure Purview MSI: **Secret: get/list** |*Key Vault Administrator* |Use this step if you have **on-premises** / **VM-based data sources** (e.g. SQL Server) <br> Use this step if **Key Vault Permission Model** is set to [Vault Access Policy](../key-vault/general/assign-access-policy.md). |
+|28 |Grant **Key Vault RBAC role** Key Vault Secrets User to Azure Purview MSI. | *Owner* or *User Access Administrator* |Use this step if you have **on-premises** or **VM-based data sources** (e.g. SQL Server) <br> Use this step if **Key Vault Permission Model** is set to [Azure role-based access control](../key-vault/general/rbac-guide.md). |
+|29 |Create a new connection to Azure Key Vault from Azure Purview Studio | *Data source admin* | Use this step if you are planning to use any of the following authentication options to scan a data source in Azure Purview: <ul><li>Account key</li><li>Basic Authentication</li><li>Delegated Auth</li><li>SQL Authentication</li><li>Service Principal</li><li>Consumer Key</li></ul> |
+|30 |Deploy a private endpoint for Power BI tenant |*Power BI Administrator* <br> *Network contributor* |Use this step if you're planning to register a Power BI tenant as a data source and your Azure Purview account is set to **deny public access**. <br> For more information, see [How to configure private endpoints for accessing Power BI](/power-bi/enterprise/service-security-private-links). |
+|31 |Connect Azure Data Factory to Azure Purview from Azure Data Factory Portal. **Manage** -> **Azure Purview**. Select **Connect to a Purview account**. <br> Validate if Azure resource tag **catalogUri** exists in ADF Azure resource. |Azure Data Factory Contributor / Data curator |Use this step if you have **Azure Data Factory**. |
+|32 |Verify that you have at least one **Microsoft 365 required license** in your Azure Active Directory tenant to use sensitivity labels in Azure Purview. |Azure Active Directory *Global Reader* |Perform this step if you're planning on extending **Sensitivity Labels from Microsoft 365 to Azure Purview**. |
+|33 |Consent "**Extend labeling to assets in Azure Purview**" |Compliance Administrator <br> Azure Information Protection Administrator |Use this step if you are interested in extending **Sensitivity Labels** from Microsoft 365 to Azure Purview. |
+|34 |Create new collections and assign roles in Azure Purview |*Collection admin* | [Create a collection and assign permissions in Azure Purview](quickstart-create-collection.md). |
+|35 |Grant access to data roles in the organization |*Collection admin* |Provide access to other teams to use Azure Purview: <ul><li> Data curator</li><li>Data reader</li><li>Collection admin</li><li>Data source admin</li><li>Policy Author</li><li>Workflow admin</li></ul> <br> For more information, see [Access control in Azure Purview](catalog-permissions.md). |
+|36 |Register and scan Data Sources in Azure Purview |*Data Source admin* <br> *Data Reader* or *Data Curator* | For more information, see [supported data sources and file types](azure-purview-connector-overview.md) |
+
+## Next steps
+- [Review Azure Purview deployment best practices](./deployment-best-practices.md)
search Search Performance Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-optimization.md
Azure Cognitive Search currently supports Availability Zones for Standard tier o
| US Gov Virginia | April 30, 2021 or later | | West Europe | January 29, 2021 or later | | West US 2 | January 30, 2021 or later |
+| West US 3 | June 02, 2021 or later |
Availability Zones do not impact the [Azure Cognitive Search Service Level Agreement](https://azure.microsoft.com/support/legal/sla/search/v1_0/). You still need 3 or more replicas for query high availability.
To learn more about the pricing tiers and services limits for each one, see [Ser
<!--Image references--> [1]: ./media/search-performance-optimization/geo-redundancy.png [2]: ./media/search-performance-optimization/scale-indexers.png
-[3]: ./media/search-performance-optimization/geo-search-traffic-mgr.png
+[3]: ./media/search-performance-optimization/geo-search-traffic-mgr.png
search Search Query Odata Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-filter.md
Find all hotels with name equal to either 'Sea View motel' or 'Budget hotel' sep
Find all hotels where all rooms have the tag 'wifi' or 'tub': ```odata-filter-expr
- $filter=Rooms/any(room: room/Tags/any(tag: search.in(tag, 'wifi, tub'))
+ $filter=Rooms/any(room: room/Tags/any(tag: search.in(tag, 'wifi, tub')))
``` Find a match on phrases within a collection, such as 'heated towel racks' or 'hairdryer included' in tags.
Find documents that have a word that starts with the letters "lux" in the Descri
- [Filters in Azure Cognitive Search](search-filters.md) - [OData expression language overview for Azure Cognitive Search](query-odata-filter-orderby-syntax.md) - [OData expression syntax reference for Azure Cognitive Search](search-query-odata-syntax-reference.md)-- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
+- [Search Documents &#40;Azure Cognitive Search REST API&#41;](/rest/api/searchservice/Search-Documents)
security Subdomain Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/subdomain-takeover.md
It's often up to developers and operations teams to run cleanup processes to avo
- Delete the DNS record if it's no longer in use, or point it to the correct Azure resource (FQDN) owned by your organization.
+### Clean up DNS pointers or reclaim the DNS
+
+Upon deletion of the classic cloud service resource, the corresponding DNS name is reserved for 7 days. During the reservation period, reuse of the DNS name is forbidden except for subscriptions belonging to the Azure AD tenant of the subscription that originally owned it. After the reservation expires, the DNS name is free to be claimed by any subscription. The reservation gives the customer time to either 1) clean up any associations or pointers to the DNS name, or 2) reclaim the DNS name in Azure. The reserved DNS name can be derived by appending the cloud service name to the DNS zone for that cloud:
+
+- Public: cloudapp.net
+- Mooncake: chinacloudapp.cn
+- Fairfax: usgovcloudapp.net
+- BlackForest: azurecloudapp.de
+
+For example, a hosted service in Public named "test" would have the DNS name "test.cloudapp.net".
+
+Example:
+Subscription 'A' and subscription 'B' are the only subscriptions belonging to Azure AD tenant 'AB'. Subscription 'A' contains a classic cloud service 'test' with DNS name 'test.cloudapp.net'. Upon deletion of the cloud service, a reservation is taken on the DNS name 'test.cloudapp.net'. During the 7-day reservation period, only subscription 'A' or subscription 'B' can claim the DNS name 'test.cloudapp.net' by creating a classic cloud service named 'test'. No other subscriptions are allowed to claim it. After the 7 days are up, any subscription in Azure can claim 'test.cloudapp.net'.
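+The derivation of the reserved DNS name described above can be sketched in Python. This is an illustrative helper, not an Azure API; the cloud-to-zone mapping comes from the list above:

```python
# Map each Azure cloud to its classic cloud service DNS zone, per the list above.
CLOUD_DNS_ZONES = {
    "Public": "cloudapp.net",
    "Mooncake": "chinacloudapp.cn",
    "Fairfax": "usgovcloudapp.net",
    "BlackForest": "azurecloudapp.de",
}

def reserved_dns_name(service_name: str, cloud: str = "Public") -> str:
    """Append the cloud service name to the DNS zone for that cloud."""
    return f"{service_name}.{CLOUD_DNS_ZONES[cloud]}"
```

For instance, `reserved_dns_name("test")` yields the `test.cloudapp.net` name used in the example below.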
++ ## Next steps To learn more about related services and Azure features you can use to defend against subdomain takeover, see the following pages.
sentinel Offboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/offboard.md
If you no longer want to use Microsoft Sentinel, this article explains how to re
Follow this process to remove Microsoft Sentinel from your workspace:
-1. Go to **Microsoft Sentinel**, followed by **Settings**, and select the tab **Remove Microsoft Sentinel**.
+1. From the Microsoft Sentinel navigation menu, under **Configuration**, select **Settings**.
-1. Before you remove Microsoft Sentinel, please use the checkboxes to let us know why you're removing it.
+1. In the **Settings** pane, select the **Settings** tab.
+
+1. Locate and expand the **Remove Microsoft Sentinel** expander (at the bottom of the list of expanders).
+
+ :::image type="content" source="media/offboard/locate-remove-sentinel.png" alt-text="Screenshot to find the setting to remove Microsoft Sentinel from your workspace.":::
+
+1. Read the **Know before you go...** section and the rest of this document carefully, making sure that you understand the implications of removing Microsoft Sentinel, and that you take all the necessary actions before proceeding.
+
+1. Before you remove Microsoft Sentinel, please mark the relevant checkboxes to let us know why you're removing it. Enter any additional details in the space provided, and indicate whether you want Microsoft to email you in response to your feedback.
1. Select **Remove Microsoft Sentinel from your workspace**.
- ![Delete the SecurityInsights solution](media/offboard/delete-solution.png)
+ :::image type="content" source="media/offboard/remove-sentinel-reasons.png" alt-text="Screenshot to remove the Microsoft Sentinel solution from your workspace and specify reasons.":::
## What happens behind the scenes?
After the disconnection is identified, the offboarding process begins.
- AWS -- Microsoft services security alerts: Microsoft Defender for Identity (*formerly Azure ATP*), Microsoft Defender for Cloud Apps including Cloud Discovery Shadow IT reporting, Azure AD Identity Protection, Microsoft Defender for Endpoint (*formerly Microsoft Defender ATP*), security alerts from Microsoft Defender for Cloud
+- Microsoft services security alerts: Microsoft Defender for Identity, Microsoft Defender for Cloud Apps (*formerly Microsoft Cloud App Security*) including Cloud Discovery Shadow IT reporting, Azure AD Identity Protection, Microsoft Defender for Endpoint, security alerts from Microsoft Defender for Cloud (*formerly Azure Defender*)
- Threat Intelligence
After the disconnection is identified, the offboarding process begins.
- Windows Security Events (If you get security alerts from Microsoft Defender for Cloud, these logs will continue to be collected.)
-Within the first 48 hours, the data and analytic rules (including real-time automation configuration) will no longer be accessible or queryable in Microsoft Sentinel.
+Within the first 48 hours, the data and analytics rules (including real-time automation configuration) will no longer be accessible or queryable in Microsoft Sentinel.
**After 30 days these resources are removed:** - Incidents (including investigation metadata) -- Analytic rules
+- Analytics rules
- Bookmarks Your playbooks, saved workbooks, saved hunting queries, and notebooks are not removed. **Some may break due to the removed data. You can remove those manually.**
-After you remove the service, there is a grace period of 30 days during which you can re-enable the solution and your data and analytic rules will be restored but the configured connectors that were disconnected must be reconnected.
+After you remove the service, there is a grace period of 30 days during which you can re-enable the solution. Your data and analytics rules will be restored, but the configured connectors that were disconnected must be reconnected.
> [!NOTE] > If you remove the solution, your subscription will continue to be registered with the Microsoft Sentinel resource provider. **You can remove it manually.**
service-fabric How To Managed Cluster Azure Active Directory Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-azure-active-directory-client.md
+
+ Title: How to configure Azure Service Fabric managed cluster for Azure Active Directory client access
+description: Learn how to configure an Azure Service Fabric managed cluster for Azure Active Directory client access
++ Last updated : 03/1/2022++
+# How to configure Azure Service Fabric managed cluster for Azure Active Directory client access
+
+Cluster security is configured when the cluster is first set up and can't be changed later. Before setting up a cluster, read [Service Fabric cluster security scenarios](service-fabric-cluster-security.md). In Azure, Service Fabric uses X.509 certificates to secure your cluster and its endpoints, authenticate clients, and encrypt data. Azure Active Directory is also recommended to secure access to management endpoints.
+
+You add the Azure AD configuration to a cluster resource manager template by referencing the key vault that contains the certificate keys. Add those Azure AD parameters and values in a Resource Manager template parameters file (*azuredeploy.parameters.json*).
+
+> [!NOTE]
+> Azure AD tenants and users must be created before creating the cluster. For more information, read [Set up Azure AD to authenticate clients](service-fabric-cluster-creation-setup-aad.md).
+
+```json
+{
+  "type": "Microsoft.ServiceFabric/managedClusters",
+  "apiVersion": "2022-01-01",
+  "properties": {
+    "azureActiveDirectory": {
+      "tenantId": "[parameters('aadTenantId')]",
+      "clusterApplication": "[parameters('aadClusterApplicationId')]",
+      "clientApplication": "[parameters('aadClientApplicationId')]"
+    }
+  }
+}
+```
+
service-fabric How To Managed Cluster Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-configuration.md
description: Learn how to configure your Service Fabric managed cluster for auto
Last updated 10/25/2021 + # Service Fabric managed cluster configuration options In addition to selecting the [Service Fabric managed cluster SKU](overview-managed-cluster.md#service-fabric-managed-cluster-skus) when creating your cluster, there are a number of other ways to configure it, including:
In addition to selecting the [Service Fabric managed cluster SKU](overview-manag
* Selecting the cluster [managed disk type](how-to-managed-cluster-managed-disk.md) SKU * Configuring cluster [upgrade options](how-to-managed-cluster-upgrades.md) for the runtime updates - ## Next steps [Service Fabric managed clusters overview](overview-managed-cluster.md)
service-fabric Service Fabric Application Upgrade Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-application-upgrade-advanced.md
The overridden delay duration only applies to the invoked upgrade instance and d
> * The settings to drain requests will not be able to prevent the Azure Load balancer from sending new requests to the endpoints which are undergoing drain. > * A complaint based resolution mechanism will not result in graceful draining of requests, as it triggers a service resolution after a failure. As described earlier, this should instead be enhanced to subscribe to the endpoint change notifications using [ServiceNotificationFilterDescription](/dotnet/api/system.fabric.description.servicenotificationfilterdescription). > * The settings are not honored when the upgrade is an impactless one i.e when the replicas will not be brought down during the upgrade.
+> * The maximum value of InstanceCloseDelayDuration that can be configured in the service description, or of InstanceCloseDelayDurationSec in the upgrade description, can't be greater than the cluster config FailoverManager.MaxInstanceCloseDelayDurationInSeconds, which defaults to 1800 seconds. To raise the maximum, the cluster-level config must be updated. This configuration is only available in runtime version 9.0 or later.
> >
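The relationship between the requested delay and the cluster-level cap can be sketched as a small validation helper. The function name is illustrative (not a Service Fabric API); the 1800-second default comes from the note above:

```python
# Default for FailoverManager.MaxInstanceCloseDelayDurationInSeconds, per the note above.
MAX_INSTANCE_CLOSE_DELAY_SEC = 1800

def validate_instance_close_delay(requested_sec, cluster_max_sec=MAX_INSTANCE_CLOSE_DELAY_SEC):
    """Reject a requested InstanceCloseDelayDuration that exceeds the cluster cap."""
    if requested_sec > cluster_max_sec:
        raise ValueError(
            f"Requested delay {requested_sec}s exceeds the cluster maximum "
            f"{cluster_max_sec}s; update the cluster-level config first."
        )
    return requested_sec
```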
service-fabric Service Fabric Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-get-started.md
If you only need the SDK, you can install this package:
The current versions are:
-* Service Fabric SDK and Tools 5.2.1486
-* Service Fabric runtime 8.2.1486
+* Service Fabric SDK and Tools 5.2.1571
+* Service Fabric runtime 8.2.1571
For a list of supported versions, see [Service Fabric versions](service-fabric-versions.md)
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
The tables in this article outline the Service Fabric and platform versions that
| Service Fabric runtime |Can upgrade directly from|Can downgrade to*|Compatible SDK or NuGet package version|Supported .NET runtimes** |OS Version |End of support | | | | | | | | |
+| 8.2 CU2.1<br>8.2.1571.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
| 8.2 CU2<br>8.2.1486.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | | 8.2 CU1<br>8.2.1363.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version | | 8.2 RTO<br>8.2.1235.9590 | 8.0 CU3<br>8.0.536.9590 | 8.0 | Less than or equal to version 5.2 | .NET 5.0 (GA), >= .NET Core 3.1, <br>All >= .NET Framework 4.5 | [See supported OS version](#supported-windows-versions-and-support-end-date) | Current version |
Support for Service Fabric on a specific OS ends when support for the OS version
| Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support | | | | | | | | |
+| 8.2 CU2.1<br>8.2.1397.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
| 8.2 CU2<br>8.2.1285.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version | | 8.2 CU1<br>8.2.1204.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version | | 8.2 RTO<br>8.2.1124.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | Current version |
The following table lists the version names of Service Fabric and their correspo
| Version name | Windows version number | Linux version number | | | | |
+| 8.2 CU2.1 | 8.2.1571.9590 | 8.2.1397.1 |
| 8.2 CU2 | 8.2.1486.9590 | 8.2.1285.1 | | 8.2 CU1 | 8.2.1363.9590 | 8.2.1204.1 | | 8.2 RTO | 8.2.1235.9590 | 8.2.1124.1 |
storage Storage Blobs Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-introduction.md
For example, if your storage account is named *mystorageaccount*, then the defau
http://mystorageaccount.blob.core.windows.net ```
-The following table describes the different types of storage accounts support Blob Storage:
+The following table describes the different types of storage accounts that are supported for Blob Storage:
| Type of storage account | Performance tier | Usage | |--|--|--|
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
The following table describes the types of storage accounts recommended by Micro
Legacy storage accounts are also supported. For more information, see [Legacy storage account types](#legacy-storage-account-types).
-You canΓÇÖt change a storage account to a different type after it's' created. To move your data to a storage account of a different type, you must create a new account and copy the data to the new account.
+You can't change a storage account to a different type after it's created. To move your data to a storage account of a different type, you must create a new account and copy the data to the new account.
## Storage account endpoints
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-external-tables.md
The key differences between Hadoop and native external tables are presented in t
| External table type | Hadoop | Native | | | | |
-| Dedicated SQL pool | Available | Parquet tables are available in **public preview**. |
+| Dedicated SQL pool | Available | Only Parquet tables are available in **public preview**. |
| Serverless SQL pool | Not available | Available |
-| Supported formats | Delimited/CSV, Parquet, ORC, Hive RC, and RC | Serverless SQL pool: Delimited/CSV, Parquet, and [Delta Lake (preview)](query-delta-lake-format.md)<br/>Dedicated SQL pool: Parquet |
-| Folder partition elimination | No | Only for partitioned tables synchronized from Apache Spark pools in Synapse workspace to serverless SQL pools |
+| Supported formats | Delimited/CSV, Parquet, ORC, Hive RC, and RC | Serverless SQL pool: Delimited/CSV, Parquet, and [Delta Lake](query-delta-lake-format.md)<br/>Dedicated SQL pool: Parquet (preview) |
+| [Folder partition elimination](#folder-partition-elimination) | No | Only for partitioned tables synchronized from Apache Spark pools in Synapse workspace to serverless SQL pools |
+| [File elimination](#file-elimination) (predicate pushdown) | No | Yes in serverless SQL pool. For the string pushdown, you need to use `Latin1_General_100_BIN2_UTF8` collation on the `VARCHAR` columns to enable pushdown. |
| Custom format for location | Yes | Yes, using wildcards like `/year=*/month=*/day=*` | | Recursive folder scan | No | Only in serverless SQL pools when specified `/**` at the end of the location path |
-| Storage filter pushdown | No | Yes in serverless SQL pool. For the string pushdown, you need to use `Latin1_General_100_BIN2_UTF8` collation on the `VARCHAR` columns. |
-| Storage authentication | Storage Access Key(SAK), AAD passthrough, Managed identity, Custom application Azure AD identity | Shared Access Signature(SAS), AAD passthrough, Managed identity |
+| Storage authentication | Storage Access Key(SAK), AAD passthrough, Managed identity, Custom application Azure AD identity | [Shared Access Signature(SAS)](develop-storage-files-storage-access-control.md?tabs=shared-access-signature), [AAD passthrough](develop-storage-files-storage-access-control.md?tabs=user-identity), [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity), [Custom application Azure AD identity](develop-storage-files-storage-access-control.md?tabs=service-principal). |
> [!NOTE]
-> Native external tables on Delta Lake format are in public preview. For more information, see [Query Delta Lake files (preview)](query-delta-lake-format.md). [CETAS](develop-tables-cetas.md) does not support exporting content in Delta Lake format.
+> Native external tables are the recommended solution in the pools where they're generally available. If you need to access external data, always use the native tables in serverless pools. In dedicated pools, switch to the native tables for reading Parquet files once they're generally available. Use Hadoop tables only if you need to access types that aren't supported in native external tables (for example, ORC and RC), or if the native version isn't available.
## External tables in dedicated SQL pool and serverless SQL pool
You can use external tables to:
- Import data from Azure Blob Storage and Azure Data Lake Storage and store it in a dedicated SQL pool (only Hadoop tables in dedicated pool). > [!NOTE]
-> When used in conjunction with the [CREATE TABLE AS SELECT](../sql-data-warehouse/sql-data-warehouse-develop-ctas.md?context=/azure/synapse-analytics/context/context) statement, selecting from an external table imports data into a table within the **dedicated** SQL pool. In addition to the [COPY statement](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true), external tables are useful for loading data.
+> When used in conjunction with the [CREATE TABLE AS SELECT](../sql-data-warehouse/sql-data-warehouse-develop-ctas.md?context=/azure/synapse-analytics/context/context) statement, selecting from an external table imports data into a table within the **dedicated** SQL pool.
+>
+> If the performance of Hadoop external tables in the dedicated pools doesn't satisfy your performance goals, consider loading external data into data warehouse tables using the [COPY statement](/sql/t-sql/statements/copy-into-transact-sql?view=azure-sqldw-latest&preserve-view=true).
> > For a loading tutorial, see [Use PolyBase to load data from Azure Blob Storage](../sql-data-warehouse/load-data-from-azure-blob-storage-using-copy.md?bc=%2fazure%2fsynapse-analytics%2fbreadcrumb%2ftoc.json&toc=%2fazure%2fsynapse-analytics%2ftoc.json).
You can create external tables in Synapse SQL pools via the following steps:
1. [CREATE EXTERNAL DATA SOURCE](#create-external-data-source) to reference an external Azure storage and specify the credential that should be used to access the storage. 2. [CREATE EXTERNAL FILE FORMAT](#create-external-file-format) to describe format of CSV or Parquet files. 3. [CREATE EXTERNAL TABLE](#create-external-table) on top of the files placed on the data source with the same file format.
+
+### Folder partition elimination
+
+The native external tables in Synapse pools can skip files placed in folders that aren't relevant to a query. If your files are stored in a folder hierarchy (for example, **/year=2020/month=03/day=16**) and the values for **year**, **month**, and **day** are exposed as columns, queries that contain filters like `year=2020` read files only from the subfolders within the **year=2020** folder. The files and folders in other folders (**year=2021** or **year=2022**) are ignored in this query. This elimination is known as **partition elimination**.
+
+Folder partition elimination is available in the native external tables that are synchronized from the Synapse Spark pools. If you have a partitioned data set and want to take advantage of partition elimination with external tables that you create yourself, use [the partitioned views](create-use-views.md#partitioned-views) instead of external tables.
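+The pruning behavior described above can be illustrated with a small path filter. This is a hypothetical sketch of the idea, not Synapse's implementation:

```python
def eliminate_partitions(paths, **filters):
    """Keep only paths whose key=value folder segments match every filter,
    e.g. year="2020" keeps year=2020/... and drops year=2021/... paths."""
    def matches(path):
        # Parse folder segments like "year=2020" into a dict of partition values.
        segments = dict(
            seg.split("=", 1) for seg in path.split("/") if "=" in seg
        )
        return all(segments.get(key) == value for key, value in filters.items())
    return [path for path in paths if matches(path)]

paths = [
    "year=2020/month=03/day=16/data.parquet",
    "year=2021/month=01/day=02/data.parquet",
]
relevant = eliminate_partitions(paths, year="2020")  # only the 2020 file survives
```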
+
+### File elimination
+
+Some data formats such as Parquet and Delta contain file statistics for each column (for example, min/max values for each column). Queries that filter data won't read files where the required column values can't exist. The query first explores the min/max values for the columns used in the query predicate to find the files that don't contain the required data. Those files are ignored and eliminated from the query plan.
+This technique is also known as filter predicate pushdown, and it can improve the performance of your queries. Filter pushdown is available in the serverless SQL pools on the Parquet and Delta formats. To leverage filter pushdown for string types, use the VARCHAR type with the `Latin1_General_100_BIN2_UTF8` collation.
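The min/max-based file elimination can be sketched as follows. The statistics structure is hypothetical (real Parquet/Delta footers are more involved); the logic shows how a predicate value is compared against each file's column range:

```python
def eliminate_files(file_stats, column, value):
    """Drop files whose [min, max] statistics for `column` show the file
    cannot contain `value` (the predicate pushdown described above)."""
    return [
        path for path, (lo, hi) in
        ((p, stats[column]) for p, stats in file_stats.items())
        if lo <= value <= hi
    ]

# Hypothetical per-file column statistics, as a format footer might expose them.
stats = {
    "part-0001.parquet": {"temperature": (10, 35)},
    "part-0002.parquet": {"temperature": (40, 80)},
}
to_read = eliminate_files(stats, "temperature", 50)  # part-0001 is eliminated
```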
### Security
External tables access underlying Azure storage using the database scoped creden
- Data source without credential enables external tables to access publicly available files on Azure storage. - Data source can have a credential that enables external tables to access only the files on Azure storage using SAS token or workspace Managed Identity - For examples, see [the Develop storage files storage access control](develop-storage-files-storage-access-control.md#examples) article. ++ ## CREATE EXTERNAL DATA SOURCE External data sources are used to connect to storage accounts. The complete documentation is outlined [here](/sql/t-sql/statements/create-external-data-source-transact-sql?view=azure-sqldw-latest&preserve-view=true).
time-series-insights How To Tsi Gen2 Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/time-series-insights/how-to-tsi-gen2-migration.md
Title: 'Time Series Insights Gen2 migration to Azure Data Explorer | Microsoft Docs' description: How to migrate Azure Time Series Insights Gen 2 environments to Azure Data Explorer. - - Last updated 3/15/2022
For more references, check [ADX Data Partitioning Policy](/azure/data-explorer/k
1. Select Next: Schema > [!NOTE]
- > TSI applies some flattening and escaping when persisting columns in Parquet files. See these links for more details: https://docs.microsoft.com/azure/time-series-insights/concepts-json-flattening-escaping-rules, https://docs.microsoft.com/azure/time-series-insights/ingestion-rules-update.
+ > TSI applies some flattening and escaping when persisting columns in Parquet files. See these links for more details: [flattening and escaping rules](concepts-json-flattening-escaping-rules.md), [ingestion rules updates](ingestion-rules-update.md).
- If schema is unknown or varying 1. Remove all columns that are infrequently queried, leaving at least timestamp and TSID column(s).
The command generated from the One-Click tool includes a SAS token. It's best to g
:::image type="content" source="media/gen2-migration/adx-ingest-sas-blob.png" alt-text="Screenshot of the Azure Data Explorer ingestion for SAS Blob URL" lightbox="media/gen2-migration/adx-ingest-sas-blob.png"::: 1. Go to the LightIngest command that you copied previously. Replace the -source parameter in the command with this ΓÇÿSAS Blob URLΓÇÖ
-1. `Option 1: Ingest All Data`. For smaller environments, you can ingest all of the data with a single command.
+1. **Option 1: Ingest All Data**. For smaller environments, you can ingest all of the data with a single command.
1. Open a command prompt and change to the directory where the LightIngest tool was extracted to. Once there, paste the LightIngest command and execute it. :::image type="content" source="media/gen2-migration/adx-ingest-lightingest-prompt.png" alt-text="Screenshot of the Azure Data Explorer ingestion for command prompt" lightbox="media/gen2-migration/adx-ingest-lightingest-prompt.png":::
-1. `Option 2: Ingest Data by Year or Month`. For larger environments or to test on a smaller data set you can filter the Lightingest command further.
- 1. By Year
- > Change your -prefix parameter
- > Before: -prefix:"V=1/PT=Time"
- > After: -prefix:"V=1/PT=Time/Y=<Year>"
- > Example: -prefix:"V=1/PT=Time/Y=2021"
- 1. By Month
- > Change your -prefix parameter
- > Before: -prefix:"V=1/PT=Time"
- > After: -prefix:"V=1/PT=Time/Y=<Year>/M=<month #>"
- > Example: -prefix:"V=1/PT=Time/Y=2021/M=03"
+1. **Option 2: Ingest Data by Year or Month**. For larger environments or to test on a smaller data set you can filter the Lightingest command further.
+
+ 1. By Year: Change your -prefix parameter
+
+ - Before: `-prefix:"V=1/PT=Time"`
+ - After: `-prefix:"V=1/PT=Time/Y=<Year>"`
+ - Example: `-prefix:"V=1/PT=Time/Y=2021"`
+
+ 1. By Month: Change your -prefix parameter
+
+ - Before: `-prefix:"V=1/PT=Time"`
+ - After: `-prefix:"V=1/PT=Time/Y=<Year>/M=<month #>"`
+ - Example: `-prefix:"V=1/PT=Time/Y=2021/M=03"`
Once you've modified the command, execute it as before. Once the ingestion is complete (use the monitoring option below), modify the command for the next year and month you want to ingest.
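To keep the per-month runs consistent, you can generate the monthly `-prefix` values up front. This is a hypothetical helper, not part of the LightIngest tooling; the year and partition layout (`V=1/PT=Time`) follow the examples above, so adjust them to match your data:

```shell
# Hypothetical helper: print one -prefix value per month of 2021 so you can
# run the LightIngest command once per month (adjust years/months as needed).
for m in 01 02 03 04 05 06 07 08 09 10 11 12; do
  printf -- '-prefix:"V=1/PT=Time/Y=2021/M=%s"\n' "$m"
done
```

Paste each printed value into the LightIngest command in turn, waiting for the previous ingestion to finish before starting the next.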
virtual-desktop Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/azure-monitor.md
Before you start using Azure Monitor for Azure Virtual Desktop, you'll need to s
Anyone monitoring Azure Monitor for Azure Virtual Desktop for your environment will also need the following read-access permissions:
-- Read-access to the Azure subscriptions that hold your Azure Virtual Desktop resources
+- Read-access to the Azure resource groups that hold your Azure Virtual Desktop resources
- Read-access to the subscription's resource groups that hold your Azure Virtual Desktop session hosts
- Read access to the Log Analytics workspace or workspaces
Now that you've configured Azure Monitor for your Azure Virtual Desktop enviro
- Check out our [glossary](azure-monitor-glossary.md) to learn more about terms and concepts related to Azure Monitor for Azure Virtual Desktop.
- To estimate, measure, and manage your data storage costs, see [Estimate Azure Monitor costs](azure-monitor-costs.md).
- If you encounter a problem, check out our [troubleshooting guide](troubleshoot-azure-monitor.md) for help and known issues.
-- To see what's new in each version update, see [What's new in Azure Monitor for Azure Virtual Desktop](whats-new-azure-monitor.md).
+- To see what's new in each version update, see [What's new in Azure Monitor for Azure Virtual Desktop](whats-new-azure-monitor.md).
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
Title: Multimedia redirection on Azure Virtual Desktop - Azure
description: How to use multimedia redirection for Azure Virtual Desktop (preview). Previously updated : 03/15/2022 Last updated : 03/16/2022
to do these things:
To learn more about the Insiders program, see [Windows Desktop client for admins](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-admin#configure-user-groups).
-4. Use [the MSI installer (MsMmrHostMri)](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWIzIk) to install both the host native component and the multimedia redirection extensions for your internet browser on your Azure VM.
+4. Use [the MSI installer (MsMmrHostMri)](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE4QWrF) to install both the host native component and the multimedia redirection extensions for your internet browser on your Azure VM.
## Managing group policies for the multimedia redirection browser extension
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Here's what's changed in the Azure Virtual Desktop Agent:
- Fixes an issue with arithmetic overflow casting exceptions.
- Updated the agent to now start the Azure Instance Metadata Service (IMDS) when the agent starts.
- Fixes an issue that caused Sandero named pipe service startups to be slow when the VM has no registration information.
- - Gneral bug fixes and agent improvements.
+ - General bug fixes and agent improvements.
- Version 1.0.4009.1500: This update was released in January 2022 and includes the following changes:
  - Added logging to better capture agent update telemetry.
  - Updated the agent's Azure Instance Metadata Service health check to be Azure Stack HCI-friendly.
We've increased the number of Azure Virtual Desktop application groups you can have
### Updates to required URLs
-We've updated the required URL list for Azure Virtual Desktop to accomodate Azure Virtual Desktop agent traffic. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/important-new-changes-in-required-urls/m-p/3094897#M8529).
+We've updated the required URL list for Azure Virtual Desktop to accommodate Azure Virtual Desktop agent traffic. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop/important-new-changes-in-required-urls/m-p/3094897#M8529).
## December 2021
virtual-machines Create Portal Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/create-portal-availability-zone.md
+
+ Title: Create zonal VMs with the Azure portal
+description: Create VMs in an availability zone with the Azure portal
+++ Last updated : 03/14/2022+++++
+# Create virtual machines in an availability zone using the Azure portal
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
+
+This article steps through using the Azure portal to create highly resilient virtual machines in [availability zones](../availability-zones/az-overview.md). Azure availability zones are physically separate locations within each Azure region that are tolerant to local failures. Use availability zones to protect your applications and data against unlikely datacenter failures.
+
+To use availability zones, create your virtual machines in a [supported Azure region](../availability-zones/az-region.md).
+
+Some users will now see the option to create VMs in multiple zones. If you see the following message, please use the **Preview** tab below.
++
+### [Standard](#tab/standard)
+
+1. Sign in to the Azure portal at https://portal.azure.com.
+
+1. Click **Create a resource** > **Compute** > **Virtual machine**.
+
+1. Enter the virtual machine information. The user name and password or SSH key are used to sign in to the virtual machine.
+
+1. Choose a region such as East US 2 that supports availability zones.
+
+1. Under **Availability options**, select **Availability zone**.
+
+1. Under **Availability zone**, select a zone from the drop-down list.
+
+1. Choose a size for the VM. Select a recommended size, or filter based on features. Confirm the size is available in the zone you want to use.
+
+1. Finish filling in the information for your VM. When you are done, select **Review + create**.
+
+1. Once the information is verified, select **Create**.
+
+1. After the VM is created, you can see the availability zone listed in the **Essentials section** on the page for the VM.
++
+### [Preview](#tab/preview)
+
+1. Sign in to the Azure portal at https://portal.azure.com.
+
+1. Click **Create a resource** > **Compute** > **Virtual machine**.
+
+1. In the **Virtual machines** page, select **Create** and then **Virtual machine**. The **Create a virtual machine** page opens.
+
+1. In the **Basics** tab, under **Project details**, make sure the correct subscription is selected and then choose a resource group or create a new one.
+
+1. Under **Instance details**, type a name for the **Virtual machine name**.
+1. For **Availability options**, leave the default of **Availability zone**.
+1. For **Availability zone**, the drop-down defaults to *Zone 1*. If you choose multiple zones, a new VM will be created in each zone. For example, if you select all three zones, then three VMs will be created. The VM names are the original name you entered, with **-1**, **-2**, and **-3** appended based on the number of zones selected. If you want, you can edit each of the default VM names.
+
+ :::image type="content" source="media/zones/3-vm-names.png" alt-text="Screenshot showing that there are now 3 virtual machines that will be created.":::
+
+1. Complete the rest of the page as usual. If you want to create a load balancer, go to the **Networking** tab > **Load Balancing** > **Load balancing options**. You can choose either an Azure load balancer or an Application gateway.
+
+ For an **Azure load balancer**:
+
+ 1. You can select an existing load balancer or select **Create a load balancer**.
+ 2. To create a new load balancer, for **Load balancer name** type a load balancer name.
+ 3. Select the **Type** of load balancer, either Public or Internal.
+ 4. Select the **Protocol**, either **TCP** or **UDP**.
+ 5. You can leave the default **Port** and **Backend port**, or change them if needed. The backend port you select will be opened up on the Network Security Group (NSG) of the VM.
+ 6. When you are done, select **Create**.
+
+ For an **Application Gateway**:
+
+ 1. Select either an existing application gateway or **Create an application gateway**.
+ 2. To create a new gateway, type the name for the application gateway. The Application Gateway can load balance multiple applications. Consider naming the Application Gateway according to the workloads you wish to load balance, rather than specific to the virtual machine name.
+ 3. In **Routing rule**, type a rule name. The rule name should describe the workload you are load balancing.
+ 4. For HTTP load balancing, you can leave the defaults and then select **Create**. For HTTPS load balancing, you have two options:
+
+ - Upload a certificate and add the password (application gateway will manage certificate storage). For certificate name, type a friendly name for the certificate.
+ - Use a key vault (application gateway will pull a defined certificate from a defined key vault). Select your **Managed identity**, **Key Vault**, and **Certificate**.
+
+ > [!IMPORTANT]
+ > After the VMs and application gateway are deployed, log in to the VMs to ensure that either the application gateway certificate is uploaded onto the VMs or the domain name of the VM certificate matches with the domain name of the application gateway.
+
+ > [!NOTE]
+ > A separate subnet will be defined for Application Gateway upon creation. For more information, see [Application Gateway infrastructure configuration](../application-gateway/configuration-infrastructure.md).
+
+1. Leave the remaining defaults and then select the **Review + create** button at the bottom of the page.
+
+1. On the **Create a virtual machine** page, you can see the details about the VM you are about to create. When you are ready, select **Create**.
+
+1. If you are creating a Linux VM and the **Generate new key pair** window opens, select **Download private key and create resource**. Your key file will be downloaded as **myKey.pem**.
+
+1. When the deployment is finished, select **Go to resource**.
++
+
+**Next steps**
+
+In this article, you learned how to create a VM in an availability zone. Learn more about [availability](availability.md) for Azure VMs.
virtual-machines Ephemeral Os Disks Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks-deploy.md
+
+ Title: Deploy Ephemeral OS disks
+description: Learn to deploy ephemeral OS disks for Azure VMs.
++++ Last updated : 07/23/2020+++++
+# How to deploy Ephemeral OS disks for Azure VMs
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
+
+This article shows you how to create a virtual machine or virtual machine scale sets with Ephemeral OS disks through the Azure portal, ARM template deployment, the CLI, and PowerShell.
+
+## Portal
+
+In the Azure portal, you can choose to use ephemeral disks when deploying a virtual machine or virtual machine scale sets by opening the **Advanced** section of the **Disks** tab. For choosing placement of Ephemeral OS disk, select **OS cache placement** or **Temp disk placement**.
+
+![Screenshot showing the radio button for choosing to use an ephemeral OS disk](./media/virtual-machines-common-ephemeral/ephemeral-portal-temp.png)
+
+
+If the option for using an ephemeral disk or OS cache placement or Temp disk placement is greyed out, you might have selected a VM size that doesn't have a cache/temp size larger than the OS image or that doesn't support Premium storage. Go back to the **Basics** page and try choosing another VM size.
+
+## Scale set template deployment
+The process to create a scale set that uses an ephemeral OS disk is to add the `diffDiskSettings` property to the
+`Microsoft.Compute/virtualMachineScaleSets/virtualMachineProfile` resource type in the template. Also, the caching policy must be set to `ReadOnly` for the ephemeral OS disk. The `placement` property can be changed to `CacheDisk` for OS cache disk placement.
+
+```json
+{
+ "type": "Microsoft.Compute/virtualMachineScaleSets",
+ "name": "myScaleSet",
+ "location": "East US 2",
+ "apiVersion": "2019-12-01",
+ "sku": {
+ "name": "Standard_DS2_v2",
+ "capacity": "2"
+ },
+ "properties": {
+ "upgradePolicy": {
+ "mode": "Automatic"
+ },
+ "virtualMachineProfile": {
+ "storageProfile": {
+ "osDisk": {
+ "diffDiskSettings": {
+ "option": "Local" ,
+ "placement": "ResourceDisk"
+ },
+ "caching": "ReadOnly",
+ "createOption": "FromImage"
+ },
+ "imageReference": {
+ "publisher": "Canonical",
+ "offer": "UbuntuServer",
+ "sku": "16.04-LTS",
+ "version": "latest"
+ }
+ },
+ "osProfile": {
+ "computerNamePrefix": "myvmss",
+ "adminUsername": "azureuser",
+ "adminPassword": "P@ssw0rd!"
+ }
+ }
+ }
+}
+```
+
+## VM template deployment
+You can deploy a VM with an ephemeral OS disk using a template. The process to create a VM that uses ephemeral OS disks is to add the `diffDiskSettings` property to the Microsoft.Compute/virtualMachines resource type in the template. Also, the caching policy must be set to `ReadOnly` for the ephemeral OS disk. The `placement` option can be changed to `CacheDisk` for OS cache disk placement.
+
+```json
+{
+ "type": "Microsoft.Compute/virtualMachines",
+ "name": "myVirtualMachine",
+ "location": "East US 2",
+ "apiVersion": "2019-12-01",
+ "properties": {
+ "storageProfile": {
+ "osDisk": {
+ "diffDiskSettings": {
+ "option": "Local" ,
+ "placement": "ResourceDisk"
+ },
+ "caching": "ReadOnly",
+ "createOption": "FromImage"
+ },
+ "imageReference": {
+ "publisher": "MicrosoftWindowsServer",
+ "offer": "WindowsServer",
+ "sku": "2016-Datacenter-smalldisk",
+ "version": "latest"
+ },
+ "hardwareProfile": {
+ "vmSize": "Standard_DS2_v2"
+ }
+ },
+ "osProfile": {
+ "computerNamePrefix": "myvirtualmachine",
+ "adminUsername": "azureuser",
+ "adminPassword": "P@ssw0rd!"
+ }
+ }
+ }
+```
+
+## CLI
+
+To use an ephemeral disk for a CLI VM deployment, set the `--ephemeral-os-disk` parameter in [az vm create](/cli/azure/vm#az_vm_create) to `true`, set `--os-disk-caching` to `ReadOnly`, and set `--ephemeral-os-disk-placement` to `ResourceDisk` for temp disk placement or `CacheDisk` for cache disk placement.
+
+```azurecli-interactive
+az vm create \
+ --resource-group myResourceGroup \
+ --name myVM \
+ --image UbuntuLTS \
+ --ephemeral-os-disk true \
+ --ephemeral-os-disk-placement ResourceDisk \
+ --os-disk-caching ReadOnly \
+ --admin-username azureuser \
+ --generate-ssh-keys
+```
+
+For scale sets, use the same `--ephemeral-os-disk true` parameter with [az vmss create](/cli/azure/vmss#az_vmss_create), set `--os-disk-caching` to `ReadOnly`, and set `--ephemeral-os-disk-placement` to `ResourceDisk` for temp disk placement or `CacheDisk` for cache disk placement.
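Put together, the scale-set flags described above might look like the following sketch. The resource group and scale-set names are placeholders, and the command is echoed here rather than executed, so you can review it before running it with the Azure CLI:

```shell
# Sketch only: the scale-set variant of the flags described above.
# myResourceGroup and myScaleSet are placeholder names.
echo az vmss create \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --image UbuntuLTS \
  --ephemeral-os-disk true \
  --ephemeral-os-disk-placement ResourceDisk \
  --os-disk-caching ReadOnly
```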
+
+## Reimage a VM using REST
+You can reimage a virtual machine instance with an ephemeral OS disk using the REST API as shown below, or via the Azure portal from the **Overview** pane of the VM. For scale sets, reimaging is already available through PowerShell, CLI, and the portal.
+
+```
+POST https://management.azure.com/subscriptions/{sub-id}/resourceGroups/{rgName}/providers/Microsoft.Compute/VirtualMachines/{vmName}/reimage?api-version=2019-12-01
+```
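To see how the placeholders compose, the request URL can be built up from its parts. The IDs and names below are made up for illustration; substitute your own subscription ID, resource group, and VM name:

```shell
# Illustrative only: fill in the placeholders from the REST call above and
# print the final request URL (the values here are made up).
sub_id="00000000-0000-0000-0000-000000000000"
rg_name="myResourceGroup"
vm_name="myVM"
echo "POST https://management.azure.com/subscriptions/${sub_id}/resourceGroups/${rg_name}/providers/Microsoft.Compute/virtualMachines/${vm_name}/reimage?api-version=2019-12-01"
```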
+
+## PowerShell
+To use an ephemeral disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set `-DiffDiskSetting` to `Local`, `-Caching` to `ReadOnly`, and `-DiffDiskPlacement` to `ResourceDisk`.
+```powershell
+Set-AzVMOSDisk -DiffDiskSetting Local -DiffDiskPlacement ResourceDisk -Caching ReadOnly
+```
+To use an ephemeral disk on the cache disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set `-DiffDiskSetting` to `Local`, `-Caching` to `ReadOnly`, and `-DiffDiskPlacement` to `CacheDisk`.
+```PowerShell
+Set-AzVMOSDisk -DiffDiskSetting Local -DiffDiskPlacement CacheDisk -Caching ReadOnly
+```
+For scale set deployments, use the [Set-AzVmssStorageProfile](/powershell/module/az.compute/set-azvmssstorageprofile) cmdlet in your configuration. Set `-DiffDiskSetting` to `Local`, `-Caching` to `ReadOnly`, and `-DiffDiskPlacement` to `ResourceDisk` or `CacheDisk`.
+```PowerShell
+Set-AzVmssStorageProfile -DiffDiskSetting Local -DiffDiskPlacement ResourceDisk -OsDiskCaching ReadOnly
+```
+
+## Next steps
+For more information, see [Ephemeral OS disks](ephemeral-os-disks.md).
virtual-machines Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/ephemeral-os-disks.md
Key differences between persistent and ephemeral OS disks:
| | Persistent OS Disk | Ephemeral OS Disk |
|---|---|---|
-| **Size limit for OS disk** | 2 TiB | Cache size for the VM size or 2 TiB, whichever is smaller. For the **cache size in GiB**, see [DS](sizes-general.md), [ES](sizes-memory.md), [M](sizes-memory.md), [FS](sizes-compute.md), and [GS](sizes-previous-gen.md#gs-series) |
-| **VM sizes supported** | All | VM sizes that support Premium storage such as DSv1, DSv2, DSv3, Esv3, Fs, FsV2, GS, M, Mdsv2,Bs, Dav4, Eav4 |
+| **Size limit for OS disk** | 2 TiB | Cache size or temp size for the VM size or 2040 GiB, whichever is smaller. For the **cache or temp size in GiB**, see [DS](sizes-general.md), [ES](sizes-memory.md), [M](sizes-memory.md), [FS](sizes-compute.md), and [GS](sizes-previous-gen.md#gs-series) |
+| **VM sizes supported** | All | VM sizes that support Premium storage such as DSv1, DSv2, DSv3, Esv3, Fs, FsV2, GS, M, Mdsv2, Bs, Dav4, Eav4 |
| **Disk type support**| Managed and unmanaged OS disk| Managed OS disk only|
| **Region support**| All regions| All regions|
| **Data persistence**| OS disk data written to OS disk is stored in Azure Storage| Data written to OS disk is stored on local VM storage and isn't persisted to Azure Storage. |
If you want to opt for **Temp disk placement**: Standard Ubuntu server image fro
Basic Linux and Windows Server images in the Marketplace that are denoted by `[smallsize]` tend to be around 30 GiB and can use most of the available VM sizes. Ephemeral disks also require that the VM size supports **Premium storage**. The sizes usually (but not always) have an `s` in the name, like DSv2 and EsV3. For more information, see [Azure VM sizes](sizes.md) for details around which sizes support Premium storage.
-## Ephemeral OS Disks can now be stored on temp/Resource disks
-Ephemeral OS disk can now be stored either in VM cache disk or VM temp/resource disk.
-This feature enables Ephemeral OS disks to be created for all the VMs, which don't have cache or have insufficient cache (such as Dav3, Dav4, Eav4, and Eav3) but has sufficient temp disk to host the Ephemeral OS disk.
+## Placement options for Ephemeral OS disks
+Ephemeral OS disks can be stored either on the VM's OS cache disk or on the VM's temp/resource disk.
[DiffDiskPlacement](/rest/api/compute/virtualmachines/list#diffdiskplacement) is the new property that can be used to specify where you want to place the Ephemeral OS disk. With this feature, when a Windows VM is provisioned, we configure the pagefile to be located on the OS Disk.
-## Portal
-
-In the Azure portal, you can choose to use ephemeral disks when deploying a virtual machine or virtual machine scale sets by opening the **Advanced** section of the **Disks** tab. For choosing placement of Ephemeral OS disk, select **OS cache placement** or **Temp disk placement**.
-
-![Screenshot showing the radio button for choosing to use an ephemeral OS disk](./media/virtual-machines-common-ephemeral/ephemeral-portal-temp.png)
-
-
-If the option for using an ephemeral disk or OS cache placement or Temp disk placement is greyed out, you might have selected a VM size that doesn't have a cache/temp size larger than the OS image or that doesn't support Premium storage. Go back to the **Basics** page and try choosing another VM size.
-
-## Scale set template deployment
-The process to create a scale set that uses an ephemeral OS disk is to add the `diffDiskSettings` property to the
-`Microsoft.Compute/virtualMachineScaleSets/virtualMachineProfile` resource type in the template. Also, the caching policy must be set to `ReadOnly` for the ephemeral OS disk. placement can be changed to `CacheDisk` for OS cache disk placement.
-
-```json
-{
- "type": "Microsoft.Compute/virtualMachineScaleSets",
- "name": "myScaleSet",
- "location": "East US 2",
- "apiVersion": "2019-12-01",
- "sku": {
- "name": "Standard_DS2_v2",
- "capacity": "2"
- },
- "properties": {
- "upgradePolicy": {
- "mode": "Automatic"
- },
- "virtualMachineProfile": {
- "storageProfile": {
- "osDisk": {
- "diffDiskSettings": {
- "option": "Local" ,
- "placement": "ResourceDisk"
- },
- "caching": "ReadOnly",
- "createOption": "FromImage"
- },
- "imageReference": {
- "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "16.04-LTS",
- "version": "latest"
- }
- },
- "osProfile": {
- "computerNamePrefix": "myvmss",
- "adminUsername": "azureuser",
- "adminPassword": "P@ssw0rd!"
- }
- }
- }
-}
-```
-
-## VM template deployment
-You can deploy a VM with an ephemeral OS disk using a template. The process to create a VM that uses ephemeral OS disks is to add the `diffDiskSettings` property to Microsoft.Compute/virtualMachines resource type in the template. Also, the caching policy must be set to `ReadOnly` for the ephemeral OS disk. placement option can be changed to `CacheDisk` for OS cache disk placement.
-
-```json
-{
- "type": "Microsoft.Compute/virtualMachines",
- "name": "myVirtualMachine",
- "location": "East US 2",
- "apiVersion": "2019-12-01",
- "properties": {
- "storageProfile": {
- "osDisk": {
- "diffDiskSettings": {
- "option": "Local" ,
- "placement": "ResourceDisk"
- },
- "caching": "ReadOnly",
- "createOption": "FromImage"
- },
- "imageReference": {
- "publisher": "MicrosoftWindowsServer",
- "offer": "WindowsServer",
- "sku": "2016-Datacenter-smalldisk",
- "version": "latest"
- },
- "hardwareProfile": {
- "vmSize": "Standard_DS2_v2"
- }
- },
- "osProfile": {
- "computerNamePrefix": "myvirtualmachine",
- "adminUsername": "azureuser",
- "adminPassword": "P@ssw0rd!"
- }
- }
- }
-```
-
-## CLI
-
-To use an ephemeral disk for a CLI VM deployment, set the `--ephemeral-os-disk` parameter in [az vm create](/cli/azure/vm#az_vm_create) to `true` and the `--ephemeral-os-disk-placement` parameter to `ResourceDisk` for temp disk placement or `CacheDisk` for cache disk placement and the `--os-disk-caching` parameter to `ReadOnly`.
-
-```azurecli-interactive
-az vm create \
- --resource-group myResourceGroup \
- --name myVM \
- --image UbuntuLTS \
- --ephemeral-os-disk true \
- --ephemeral-os-disk-placement ResourceDisk \
- --os-disk-caching ReadOnly \
- --admin-username azureuser \
- --generate-ssh-keys
-```
+## Unsupported features
+- Capturing VM images
+- Disk snapshots
+- Azure Disk Encryption
+- Azure Backup
+- Azure Site Recovery
+- OS Disk Swap
-For scale sets, you use the same `--ephemeral-os-disk true` parameter for [az-vmss-create](/cli/azure/vmss#az_vmss_create) and set the `--os-disk-caching` parameter to `ReadOnly` and the `--ephemeral-os-disk-placement` parameter to `ResourceDisk` for temp disk placement or `CacheDisk` for cache disk placement.
+ ## Trusted Launch for Ephemeral OS disks (Preview)
+Ephemeral OS disks can be created with Trusted launch. Not all VM sizes and regions are supported for trusted launch. Please check [limitations of trusted launch](trusted-launch.md#limitations) for supported sizes and regions.
+VM guest state (VMGS) is specific to trusted launch VMs. It is a blob that is managed by Azure and contains the unified extensible firmware interface (UEFI) secure boot signature databases and other security information. When using trusted launch, **1 GiB** from the **OS cache** or **temp storage** (based on the chosen placement option) is reserved by default for VMGS. The lifecycle of the VMGS blob is tied to that of the OS Disk.
-## Reimage a VM using REST
-You can reimage a Virtual Machine instance with ephemeral OS disk using REST API as described below and via Azure portal by going to Overview pane of the VM. For scale sets, reimaging is already available through PowerShell, CLI, and the portal.
+For example, if you try to create a Trusted launch Ephemeral OS disk VM using an OS image of size 56 GiB with VM size [Standard_DS4_v2](dv2-dsv2-series.md) and temp disk placement, you would get the following error:
+**"OS disk of Ephemeral VM with size greater than 55 GB is not allowed for VM size Standard_DS4_v2 when the DiffDiskPlacement is ResourceDisk."**
+This is because the temp storage for [Standard_DS4_v2](dv2-dsv2-series.md) is 56 GiB, and 1 GiB is reserved for VMGS when using trusted launch.
+For the same example, if you create a standard Ephemeral OS disk VM, the operation would succeed without errors.
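The size limit in the error above falls directly out of the VMGS reservation: the temp storage size minus the reserved 1 GiB is the largest OS image that fits with temp disk placement.

```shell
# The arithmetic behind the error above: temp storage minus the 1 GiB VMGS
# reservation gives the largest OS image that fits with temp disk placement.
temp_storage_gib=56   # Standard_DS4_v2 temp disk size
vmgs_gib=1            # reserved for VM guest state with trusted launch
echo $(( temp_storage_gib - vmgs_gib ))   # prints 55
```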
-```
-POST https://management.azure.com/subscriptions/{sub-
-id}/resourceGroups/{rgName}/providers/Microsoft.Compute/VirtualMachines/{vmName}/reimage?a pi-version=2019-12-01"
-```
-
-## PowerShell
-To use an ephemeral disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set the `-DiffDiskSetting` to `Local` and `-Caching` to `ReadOnly` and `-DiffDiskPlacement` to `ResourceDisk`.
-```powershell
-Set-AzVMOSDisk -DiffDiskSetting Local -DiffDiskPlacement ResourceDisk -Caching ReadOnly
+> [!NOTE]
+>
+> While using ephemeral disks for Trusted Launch VMs, keys and secrets generated or sealed by the vTPM after VM creation may not be persisted for operations like reimaging and platform events like service healing.
+>
+For more information, see [how to deploy a trusted launch VM](trusted-launch-portal.md).
-```
-To use an ephemeral disk on cache disk for a PowerShell VM deployment, use [Set-AzVMOSDisk](/powershell/module/az.compute/set-azvmosdisk) in your VM configuration. Set the `-DiffDiskSetting` to `Local` , `-Caching` to `ReadOnly` and `-DiffDiskPlacement` to `CacheDisk`.
-```PowerShell
-Set-AzVMOSDisk -DiffDiskSetting Local -DiffDiskPlacement CacheDisk -Caching ReadOnly
-```
-For scale set deployments, use the [Set-AzVmssStorageProfile](/powershell/module/az.compute/set-azvmssstorageprofile) cmdlet in your configuration. Set the `-DiffDiskSetting` to `Local` , `-Caching` to `ReadOnly` and `-DiffDiskPlacement` to `ResourceDisk` or `CacheDisk`.
-```PowerShell
-Set-AzVmssStorageProfile -DiffDiskSetting Local -DiffDiskPlacement ResourceDisk -OsDiskCaching ReadOnly
-```
-
## Frequently asked questions

**Q: What is the size of the local OS Disks?**
A: No, you can't have a mix of ephemeral and persistent OS disk instances within
A: Yes, you can create VMs with Ephemeral OS Disk using REST, Templates, PowerShell, and CLI.
-**Q: What features are not supported with ephemeral OS disk?**
-
-A: Ephemeral disks do not support:
-- Capturing VM images
-- Disk snapshots
-- Azure Disk Encryption
-- Azure Backup
-- Azure Site Recovery
-- OS Disk Swap
-
> [!NOTE]
>
> Ephemeral disk will not be accessible through the portal. You will receive a "Resource not Found" or "404" error when accessing the ephemeral disk, which is expected.
>
## Next steps
-You can create a VM with an ephemeral OS disk using the [Azure CLI](/cli/azure/vm#az_vm_create).
+Create a VM with an ephemeral OS disk using the [Azure portal, CLI, PowerShell, or an ARM template](ephemeral-os-disks-deploy.md).
virtual-machines Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-windows.md
When logged in to a Windows VM, Task Manager can be used to examine running proc
## Upgrade the VM Agent
-The Azure VM Agent for Windows is automatically upgraded on images deployed from the Azure Marketplace. As new VMs are deployed to Azure, they receive the latest VM agent at VM provision time. If you have installed the agent manually or are deploying custom VM images you will need to manually update to include the new VM agent at image creation time.
+The Azure VM Agent for Windows is automatically upgraded on images deployed from the Azure Marketplace. The new versions are stored in Azure Storage, so please ensure you don't have firewalls blocking access. As new VMs are deployed to Azure, they receive the latest VM agent at VM provision time. If you have installed the agent manually or are deploying custom VM images, you will need to manually update them to include the new VM agent at image creation time.
## Windows Guest Agent Automatic Logs Collection Windows Guest Agent has a feature to automatically collect some logs. This feature is controller by the CollectGuestLogs.exe process.
virtual-machines Freebsd Intro On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/freebsd-intro-on-azure.md
Microsoft Corporation is making images of FreeBSD available on Azure with the [A
- FreeBSD 10.4 on the Azure Marketplace - FreeBSD 11.2 on the Azure Marketplace
+- FreeBSD 11.3 on the Azure Marketplace
- FreeBSD 12.0 on the Azure Marketplace
+The following FreeBSD versions also include the [Azure VM Guest Agent](https://github.com/Azure/WALinuxAgent/); however, they are offered as images by the FreeBSD Foundation:
+- FreeBSD 11.4 on the Azure Marketplace
+- FreeBSD 12.2 on the Azure Marketplace
+- FreeBSD 13.0 on the Azure Marketplace
+The agent is responsible for communication between the FreeBSD VM and the Azure fabric for operations such as provisioning the VM on first use (user name, password or SSH key, host name, etc.) and enabling functionality for selective VM extensions.

As for future versions of FreeBSD, the strategy is to stay current and make the latest releases available shortly after they are published by the FreeBSD release engineering team.
virtual-machines Quick Create Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/quick-create-bicep.md
+
+ Title: 'Quickstart: Use a Bicep file to create an Ubuntu Linux VM'
+description: In this quickstart, you learn how to use a Bicep file to create a Linux virtual machine
+++++ Last updated : 03/10/2022++
+tags: azure-resource-manager, bicep
++
+# Quickstart: Create an Ubuntu Linux virtual machine using a Bicep file
+
+**Applies to:** :heavy_check_mark: Linux VMs
+
+This quickstart shows you how to use a Bicep file to deploy an Ubuntu Linux virtual machine (VM) in Azure.
++
+## Prerequisites
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/vm-simple-linux/).
++
+Several resources are defined in the Bicep file:
+
+- [**Microsoft.Network/virtualNetworks/subnets**](/azure/templates/Microsoft.Network/virtualNetworks/subnets): create a subnet.
+- [**Microsoft.Storage/storageAccounts**](/azure/templates/Microsoft.Storage/storageAccounts): create a storage account.
+- [**Microsoft.Network/networkInterfaces**](/azure/templates/Microsoft.Network/networkInterfaces): create a NIC.
+- [**Microsoft.Network/networkSecurityGroups**](/azure/templates/Microsoft.Network/networkSecurityGroups): create a network security group.
+- [**Microsoft.Network/virtualNetworks**](/azure/templates/Microsoft.Network/virtualNetworks): create a virtual network.
+- [**Microsoft.Network/publicIPAddresses**](/azure/templates/Microsoft.Network/publicIPAddresses): create a public IP address.
+- [**Microsoft.Compute/virtualMachines**](/azure/templates/Microsoft.Compute/virtualMachines): create a virtual machine.
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters adminUsername=<admin-username>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -adminUsername "<admin-username>"
+ ```
+
+
+
+ > [!NOTE]
+ > Replace **\<admin-username\>** with a unique username. You'll also be prompted to enter adminPasswordOrKey.
+
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az resource list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzResource -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to delete the VM and all of the resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+In this quickstart, you deployed a simple virtual machine using a Bicep file. To learn more about Azure virtual machines, continue to the tutorial for Linux VMs.
+
+> [!div class="nextstepaction"]
+> [Azure Linux virtual machine tutorials](./tutorial-manage-vm.md)
virtual-machines Lsv2 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv2-series.md
The Lsv2-series features high throughput, low latency, directly mapped local NVM
[VM Generation Support](generation-2.md): Generation 1 and 2<br> Bursting: Supported<br> [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md): Supported<br>
-[Ephemeral OS Disks](ephemeral-os-disks.md): Not Supported <br>
+[Ephemeral OS Disks](ephemeral-os-disks.md): Supported <br>
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br> <br>
Bursting: Supported<br>
<sup>1</sup> Lsv2-series VMs have a standard SCSI based temp resource disk for OS paging/swap file use (D: on Windows, /dev/sdb on Linux). This disk provides 80 GiB of storage, 4,000 IOPS, and 80 MBps transfer rate for every 8 vCPUs (e.g. Standard_L80s_v2 provides 800 GiB at 40,000 IOPS and 800 MBps). This ensures the NVMe drives can be fully dedicated to application use. This disk is ephemeral, and all data will be lost on stop/deallocate.
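The per-8-vCPU scaling rule above can be sketched as a small helper. This is an illustrative snippet, not part of the product documentation; the figures are taken from this section:

```python
# Lsv2-series temp resource disk scales with vCPU count:
# 80 GiB, 4,000 IOPS, and 80 MBps per 8 vCPUs (figures from the note above).
def lsv2_temp_disk(vcpus: int) -> dict:
    units = vcpus // 8
    return {"size_gib": 80 * units, "iops": 4000 * units, "mbps": 80 * units}

# Standard_L80s_v2 has 80 vCPUs
print(lsv2_temp_disk(80))  # {'size_gib': 800, 'iops': 40000, 'mbps': 800}
```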
-<sup>2</sup> Local NVMe disks are ephemeral, data will be lost on these disks if you stop/deallocate your VM. Local NVMe disk aren't encrypted by [Azure Storage encryption](disk-encryption.md), even if you enable [encryption at host](disk-encryption.md#supported-vm-sizes).
+<sup>2</sup> Local NVMe disks are ephemeral, data will be lost on these disks if you stop/deallocate your VM. Local NVMe disks aren't encrypted by [Azure Storage encryption](disk-encryption.md), even if you enable [encryption at host](disk-encryption.md#supported-vm-sizes).
<sup>3</sup> Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Achieving maximum performance requires using either the latest WS2019 build or Ubuntu 18.04 or 16.04 from the Azure Marketplace. Write performance varies based on IO size, drive load, and capacity utilization.
Bursting: Supported<br>
Pricing Calculator: [Pricing Calculator](https://azure.microsoft.com/pricing/calculator/)
-More information on Disks Types : [Disk Types](./disks-types.md#ultra-disks)
+More information on Disks Types: [Disk Types](./disks-types.md#ultra-disks)
## Next steps
virtual-machines Nc A100 V4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nc-a100-v4-series.md
Due to increased GPU memory I/O footprint, the NC A100 v4 requires the use of [G
Key Features: - [Premium Storage](premium-storage-performance.md) -- [Premium Storage caching](premium-storage-performance.md) -- [Ultra Disks](disks-types.md#ultra-disks)
+- [Premium Storage caching](premium-storage-performance.md)
- [VM Generation 2](generation-2.md) -- [Accelerated Networking](../virtual-network/create-vm-accelerated-networking-cli.md) - [Ephemeral OS Disks](ephemeral-os-disks.md) - NVIDIA NVLink Interconnect
These features are not supported:[Live Migration](maintenance-and-updates.md), [
| Size | vCPU | Memory: GiB | Temp Storage (with NVMe): GiB | GPU | GPU Memory: GiB | Max data disks | Max uncached disk throughput: IOPS / MBps | Max NICs/network bandwidth (Mbps) |
||||||||||
-| Standard_NC24ads_A100_v4 | 24 | 220 | 1123 | 1 | 80 | 12 | 20000/200 | 4/20,000 |
-| Standard_NC48ads_A100_v4 | 48 | 440 | 2246 | 2 | 160 | 24 | 40000/400 | 8/40,000 |
-| Standard_NC96ads_A100_v4 | 96 | 880 | 4492 | 4 | 320 | 32 | 80000/800 | 8/80,000 |
+| Standard_NC24ads_A100_v4 | 24 | 220 | 1123 | 1 | 80 | 12 | 30000/1000 | 2/20,000 |
+| Standard_NC48ads_A100_v4 | 48 | 440 | 2246 | 2 | 160 | 24 | 60000/2000 | 4/40,000 |
+| Standard_NC96ads_A100_v4 | 96 | 880 | 4492 | 4 | 320 | 32 | 120000/4000 | 8/80,000 |
1 GPU = one A100 card
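The NC A100 v4 sizes in the corrected table scale linearly with GPU count. A hedged sketch, with per-GPU figures derived from the table above (the helper itself is illustrative, not an official API):

```python
# Each NC A100 v4 GPU brings 24 vCPUs, 220 GiB RAM, 1123 GiB temp
# storage, and 80 GiB GPU memory (per the size table above).
def nc_a100_v4(gpus: int) -> dict:
    return {
        "vcpus": 24 * gpus,
        "memory_gib": 220 * gpus,
        "temp_storage_gib": 1123 * gpus,
        "gpu_memory_gib": 80 * gpus,
    }

# Standard_NC96ads_A100_v4 has 4 GPUs
print(nc_a100_v4(4))  # {'vcpus': 96, 'memory_gib': 880, 'temp_storage_gib': 4492, 'gpu_memory_gib': 320}
```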
virtual-machines Create Portal Availability Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/create-portal-availability-zone.md
- Title: Create a zoned VM with the Azure portal
-description: Create a VM in an availability zone with the Azure portal
--- Previously updated : 5/10/2021-----
-# Create a virtual machine in an availability zone using the Azure portal
-
-**Applies to:** :heavy_check_mark: Windows VMs
-
-This article steps through using the Azure portal to create a virtual machine in an Azure availability zone. An [availability zone](../../availability-zones/az-overview.md) is a physically separate zone in an Azure region. Use availability zones to protect your apps and data from an unlikely failure or loss of an entire datacenter.
-
-To use an availability zone, create your virtual machine in a [supported Azure region](../../availability-zones/az-region.md).
-
-## Sign in to Azure
-
-1. Sign in to the Azure portal at https://portal.azure.com.
-
-1. Click **Create a resource** > **Compute** > **Virtual machine**.
-
-3. Enter the virtual machine information. The user name and password is used to sign in to the virtual machine. The password must be at least 12 characters long and meet the [defined complexity requirements](faq.yml#what-are-the-password-requirements-when-creating-a-vm-).
-
-4. Choose a region such as East US 2 that supports availability zones.
-
-5. Under **Availability options**, select **Availability zone** dropdown.
-
-1. Under **Availability zone**, select a zone from the drop-down list.
-
-4. Choose a size for the VM. Select a recommended size, or filter based on features. Confirm the size is available in the zone you want to use.
-
-6. Finish filling in the information for your VM. When you are done, select **Review + create**.
-
-7. Once the information is verified, select **Create**.
-
-1. After the VM is created, you can see the availability zone listed in the **Essentials section** on the page for the VM.
-
-## Confirm zone for managed disk and IP address
-
-When the VM is deployed in an availability zone, a managed disk for the VM is created in the same availability zone. By default, a public IP address is also created in that zone.
-
-You can confirm the zone settings for these resources in the portal.
-
-1. Select **Disks** from the left menu and then select the OS disk. The page for the disk includes details about the location and availability zone of the disk.
-
-1. Back on the page for the VM, select the public IP address. In the left menu, select **Properties**. The properties page includes details about the location and availability zone of the public IP address.
-
-
-## Next steps
-
-In this article, you learned how to create a VM in an availability zone. Learn more about [availability](../availability.md) for Azure VMs.
virtual-machines Jboss Eap Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-marketplace-image.md
These offer plans create all the Azure compute resources to run JBoss EAP setup
### 5. Use an External Load Balancer (ELB) to access your RHEL VM/virtual machine scale sets
-1. [Create a Load Balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md?tabs=option-1-create-load-balancer-standard#create-load-balancer-resources) to access the ports of the RHEL VM. Provide the required details to deploy the external Load Balancer and leave other configurations as default. Leave the SKU as Basic for the ELB configuration.
-2. Add Load Balancer rules - once the Load balancer has been created successfully, [create Load Balancer resources](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md?tabs=option-1-create-load-balancer-standard#create-load-balancer-resources), then add Load Balancer rules to access ports 8080 and 9990 of the RHEL VM.
+1. [Create a Load Balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md#create-load-balancer) to access the ports of the RHEL VM. Provide the required details to deploy the external Load Balancer and leave other configurations as default. Leave the SKU as Basic for the ELB configuration.
+2. Add Load Balancer rules - once the Load balancer has been created successfully, [create Load Balancer resources](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md#create-load-balancer), then add Load Balancer rules to access ports 8080 and 9990 of the RHEL VM.
3. Add the RHEL VM to the backend pool of the Load Balancer - select *Backend pools* under the settings section and then select the backend pool you created in the step above. Select the VM corresponding to the option *Associated to* and then add the RHEL VM.
4. To obtain the Public IP of the Load Balancer - go to the Load Balancer overview page and copy the Public IP of the Load Balancer.
5. To view the JBoss EAP on Azure web page - open a web browser, go to *http://<PUBLIC_IP_LoadBalancer>:8080/*, and you should see the default EAP welcome page.
virtual-machines Jboss Eap On Azure Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/jboss-eap-on-azure-migration.md
You can expose the application using the following methods which is suitable for
* [Create a Jump VM in the Same Virtual Network (VNet)](../../windows/quick-create-portal.md#create-virtual-machine) in a different subnet (new subnet) in the same VNet and access the server via a Jump VM. This Jump VM can be used to expose the application.
* [Create a Jump VM with VNet Peering](../../windows/quick-create-portal.md#create-virtual-machine) in a different Virtual Network and access the server and expose the application using [Virtual Network Peering](../../../virtual-network/tutorial-connect-virtual-networks-portal.md#peer-virtual-networks).
* Expose the application using an [Application Gateway](../../../application-gateway/quick-create-portal.md#create-an-application-gateway)
-* Expose the application using an [External Load Balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md?tabs=option-1-create-load-balancer-standard#create-load-balancer-resources) (ELB).
+* Expose the application using an [External Load Balancer](../../../load-balancer/quickstart-load-balancer-standard-public-portal.md#create-load-balancer) (ELB).
## Post-migration
virtual-machines Dbms_Guide_Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/dbms_guide_ibm.md
vm-linux Previously updated : 02/09/2022 Last updated : 03/15/2022
The following SAP Notes are related to SAP on Azure regarding the area covered i
| [2002167] |Red Hat Enterprise Linux 7.x: Installation and Upgrade |
| [1597355] |Swap-space recommendation for Linux |
-As a pr-read to this document, you should have read the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md) plus other guides in the [SAP workload on Azure documentation](./get-started.md).
+As a pre-read to this document, you should have read the document [Considerations for Azure Virtual Machines DBMS deployment for SAP workload](dbms_guide_general.md) plus other guides in the [SAP workload on Azure documentation](./get-started.md).
## IBM Db2 for Linux, UNIX, and Windows Version Support
IBM Db2 for SAP NetWeaver Applications is supported on any VM type listed in SAP
Following is a baseline configuration for various sizes and uses of SAP on Db2 deployments from small to large. The list is based on Azure premium storage. However, Azure Ultra disk is also fully supported with Db2 and can be used instead. Use the values for capacity, burst throughput, and burst IOPS to define the Ultra disk configuration. You can limit the IOPS for the /db2/```<SID>```/log_dir at around 5000 IOPS.

#### Extra small SAP system: database size 50 - 200 GB: example Solution Manager
-| VM Name / Size |Db2 mount point |Azure Premium Disk |NR of Disks |IOPS |Throughput [MB/s] |Size [GB] |Burst IOPS |Burst Thr [GB] | Stripe size | Caching |
+| VM Name / Size |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching |
| | | | :: | : | : | : | : | : | : | : |
|E4ds_v4 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || |
-|vCPU: 4 |/db2/```<SID>```/sapdata |P10 |2 |1,000 |200 |256 |7,000 |340 |256 KB |ReadOnly |
+|vCPU: 4 |/db2/```<SID>```/sapdata |P10 |2 |1,000 |200 |256 |7,000 |340 |256<br />KB |ReadOnly |
|RAM: 32 GiB |/db2/```<SID>```/saptmp |P6 |1 |240 |50 |128 |3,500 |170 | ||
-| |/db2/```<SID>```/log_dir |P6 |2 |480 |100 |128 |7,000 |340 |64 KB ||
+| |/db2/```<SID>```/log_dir |P6 |2 |480 |100 |128 |7,000 |340 |64<br />KB ||
| |/db2/```<SID>```/offline_log_dir |P10 |1 |500 |100 |128 |3,500 |170 || |

#### Small SAP system: database size 200 - 750 GB: small Business Suite
-| VM Name / Size |Db2 mount point |Azure Premium Disk |NR of Disks |IOPS |Throughput [MB/s] |Size [GB] |Burst IOPS |Burst Thr [GB] | Stripe size | Caching |
+| VM Name / Size |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching |
| | | | :: | : | : | : | : | : | : | : |
|E16ds_v4 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || |
|vCPU: 16 |/db2/```<SID>```/sapdata |P15 |4 |4,400 |500 |1.024 |14,000 |680 |256 KB |ReadOnly |
|RAM: 128 GiB |/db2/```<SID>```/saptmp |P6 |2 |480 |100 |128 |7,000 |340 |128 KB ||
-| |/db2/```<SID>```/log_dir |P15 |2 |2,200 |250 |512 |7,000 |340 |64 KB ||
+| |/db2/```<SID>```/log_dir |P15 |2 |2,200 |250 |512 |7,000 |340 |64<br />KB ||
| |/db2/```<SID>```/offline_log_dir |P10 |1 |500 |100 |128 |3,500 |170 |||

#### Medium SAP system: database size 500 - 1000 GB: small Business Suite
-| VM Name / Size |Db2 mount point |Azure Premium Disk |NR of Disks |IOPS |Throughput [MB/s] |Size [GB] |Burst IOPS |Burst Thr [GB] | Stripe size | Caching |
+| VM Name / Size |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching |
| | | | :: | : | : | : | : | : | : | : |
|E32ds_v4 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || |
|vCPU: 32 |/db2/```<SID>```/sapdata |P30 |2 |10,000 |400 |2.048 |10,000 |400 |256 KB |ReadOnly |
|RAM: 256 GiB |/db2/```<SID>```/saptmp |P10 |2 |1,000 |200 |256 |7,000 |340 |128 KB ||
-| |/db2/```<SID>```/log_dir |P20 |2 |4,600 |300 |1.024 |7,000 |340 |64 KB ||
+| |/db2/```<SID>```/log_dir |P20 |2 |4,600 |300 |1.024 |7,000 |340 |64<br />KB ||
| |/db2/```<SID>```/offline_log_dir |P15 |1 |1,100 |125 |256 |3,500 |170 |||

#### Large SAP system: database size 750 - 2000 GB: Business Suite
-| VM Name / Size |Db2 mount point |Azure Premium Disk |NR of Disks |IOPS |Throughput [MB/s] |Size [GB] |Burst IOPS |Burst Thr [GB] | Stripe size | Caching |
+| VM Name / Size |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching |
| | | | :: | : | : | : | : | : | : | : |
|E64ds_v4 |/db2 |P6 |1 |240 |50 |64 |3,500 |170 || |
|vCPU: 64 |/db2/```<SID>```/sapdata |P30 |4 |20,000 |800 |4.096 |20,000 |800 |256 KB |ReadOnly |
|RAM: 504 GiB |/db2/```<SID>```/saptmp |P15 |2 |2,200 |250 |512 |7,000 |340 |128 KB ||
-| |/db2/```<SID>```/log_dir |P20 |4 |9,200 |600 |2.048 |14,000 |680 |64 KB ||
+| |/db2/```<SID>```/log_dir |P20 |4 |9,200 |600 |2.048 |14,000 |680 |64<br />KB ||
| |/db2/```<SID>```/offline_log_dir |P20 |1 |2,300 |150 |512 |3,500 |170 || |

#### Large multi-terabyte SAP system: database size 2 TB+: Global Business Suite system
-| VM Name / Size |Db2 mount point |Azure Premium Disk |NR of Disks |IOPS |Throughput [MB/s] |Size [GB] |Burst IOPS |Burst Thr [GB] | Stripe size | Caching |
+| VM Name / Size |Db2 mount point |Azure Premium Disk |# of Disks |IOPS |Through-<br />put [MB/s] |Size [GB] |Burst IOPS |Burst Through-<br />put [GB] | Stripe size | Caching |
| | | | :: | : | : | : | : | : | : | : |
|M128s |/db2 |P10 |1 |500 |100 |128 |3,500 |170 || |
|vCPU: 128 |/db2/```<SID>```/sapdata |P40 |4 |30,000 |1.000 |8.192 |30,000 |1.000 |256 KB |ReadOnly |
|RAM: 2048 GiB |/db2/```<SID>```/saptmp |P20 |2 |4,600 |300 |1.024 |7,000 |340 |128 KB ||
-| |/db2/```<SID>```/log_dir |P30 |4 |20,000 |800 |4.096 |20,000 |800 |64 KB |WriteAccelerator |
+| |/db2/```<SID>```/log_dir |P30 |4 |20,000 |800 |4.096 |20,000 |800 |64<br />KB |Write-<br />Accelerator |
| |/db2/```<SID>```/offline_log_dir |P30 |1 |5,000 |200 |1.024 |5,000 |200 || |
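The per-mount-point figures in these tables are simple sums over the stripe set. A hedged sketch using the per-disk base figures that appear in the tables above (P6: 240 IOPS/50 MBps, P10: 500/100, P15: 1,100/125, P20: 2,300/150, P30: 5,000/200); the helper itself is illustrative:

```python
# Base (non-burst) IOPS and throughput per Azure premium disk SKU,
# taken from the single-disk rows in the tables above.
PREMIUM_DISKS = {  # name: (IOPS, MBps)
    "P6": (240, 50),
    "P10": (500, 100),
    "P15": (1100, 125),
    "P20": (2300, 150),
    "P30": (5000, 200),
}

def stripe_set(disk: str, count: int) -> dict:
    """Aggregate base IOPS/throughput for an LVM stripe set of identical disks."""
    iops, mbps = PREMIUM_DISKS[disk]
    return {"iops": iops * count, "mbps": mbps * count}

# Extra-small sapdata: 2 x P10 -> 1,000 IOPS / 200 MBps, matching the table
print(stripe_set("P10", 2))  # {'iops': 1000, 'mbps': 200}
```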
vi /etc/idmapd.conf
Nobody-User = nobody
Nobody-Group = nobody
-mount -t nfs -o rw,hard,sync,rsize=1048576,wsize=1048576,sec=sys,vers=4.1,tcp 172.17.10.4:/db2shared /mnt
+mount -t nfs -o rw,hard,sync,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp 172.17.10.4:/db2shared /mnt
mkdir -p /db2/Software /db2/AN1/saptmp /usr/sap/<SID> /sapmnt/<SID> /home/<sid>adm /db2/db2<sid> /db2/<SID>/db2_software
mkdir -p /mnt/Software /mnt/saptmp /mnt/usr_sap /mnt/sapmnt /mnt/<sid>_home /mnt/db2_software /mnt/db2<sid>
umount /mnt
-mount -t nfs -o rw,hard,sync,rsize=1048576,wsize=1048576,sec=sys,vers=4.1,tcp 172.17.10.4:/db2data /mnt
+mount -t nfs -o rw,hard,sync,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp 172.17.10.4:/db2data /mnt
mkdir -p /db2/AN1/sapdata/sapdata1 /db2/AN1/sapdata/sapdata2 /db2/AN1/sapdata/sapdata3 /db2/AN1/sapdata/sapdata4
mkdir -p /mnt/sapdata1 /mnt/sapdata2 /mnt/sapdata3 /mnt/sapdata4
umount /mnt
-mount -t nfs -o rw,hard,sync,rsize=1048576,wsize=1048576,sec=sys,vers=4.1,tcp 172.17.10.4:/db2log /mnt
+mount -t nfs -o rw,hard,sync,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp 172.17.10.4:/db2log /mnt
mkdir /db2/AN1/log_dir
mkdir /mnt/log_dir
umount /mnt
-mount -t nfs -o rw,hard,sync,rsize=1048576,wsize=1048576,sec=sys,vers=4.1,tcp 172.17.10.4:/db2backup /mnt
+mount -t nfs -o rw,hard,sync,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp 172.17.10.4:/db2backup /mnt
mkdir /db2/AN1/backup
mkdir /mnt/backup
mkdir /db2/AN1/offline_log_dir /db2/AN1/db2dump
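The repeated mount invocations above all share one option string, including the corrected rsize/wsize of 262144 bytes. A hedged helper that reproduces it (the server IP and export paths are the examples used above; the function is illustrative):

```python
# NFSv4.1 mount options used above for the Azure NetApp Files volumes,
# with the corrected rsize/wsize of 262144 bytes.
ANF_OPTS = "rw,hard,sync,rsize=262144,wsize=262144,sec=sys,vers=4.1,tcp"

def anf_mount_cmd(server: str, export: str, mountpoint: str = "/mnt") -> str:
    """Build the mount command for one ANF export, as run in the steps above."""
    return f"mount -t nfs -o {ANF_OPTS} {server}:{export} {mountpoint}"

print(anf_mount_cmd("172.17.10.4", "/db2data"))
```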
virtual-machines Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/get-started.md
ms.assetid: ad8e5c75-0cf6-4564-ae62-ea1246b4e5f2
vm-linux Previously updated : 03/01/2022 Last updated : 03/15/2022
In this section, you find documents about Microsoft Power BI integration into SA
## Change Log
+- March 15, 2022: Corrected rsize and wsize mount option settings for ANF in [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](./dbms_guide_ibm.md)
- March 1, 2022: Corrected note about database snapshots with multiple database containers in [SAP HANA Large Instances high availability and disaster recovery on Azure](./hana-overview-high-availability-disaster-recovery.md)
- February 28, 2022: Added E(d)sv5 VM storage configurations to [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)
- February 13, 2022: Corrected broken links to HANA hardware directory in the following documents: SAP Business One on Azure Virtual Machines, Available SKUs for HANA Large Instances, Certification of SAP HANA on Azure (Large Instances), Installation of SAP HANA on Azure virtual machines, SAP workload planning and deployment checklist, SAP HANA infrastructure configurations and operations on Azure, SAP HANA on Azure Large Instance migration to Azure Virtual Machines, Install and configure SAP HANA (Large Instances) on Azure, High availability of SAP HANA scale-out system on Red Hat Enterprise Linux, High availability for SAP HANA scale-out system with HSR on SUSE Linux Enterprise Server, High availability of SAP HANA on Azure VMs on SUSE Linux Enterprise Server, Deploy a SAP HANA scale-out system with standby node on Azure VMs by using Azure NetApp Files on SUSE Linux Enterprise Server, SAP workload on Azure virtual machine supported scenarios, What SAP software is supported for Azure deployments